Oct 09 09:31:56 localhost kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct 09 09:31:56 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct 09 09:31:56 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:56 localhost kernel: BIOS-provided physical RAM map:
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct 09 09:31:56 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000027fffffff] usable
Oct 09 09:31:56 localhost kernel: NX (Execute Disable) protection: active
Oct 09 09:31:56 localhost kernel: APIC: Static calls initialized
Oct 09 09:31:56 localhost kernel: SMBIOS 2.8 present.
Oct 09 09:31:56 localhost kernel: DMI: Red Hat OpenStack Compute/RHEL, BIOS 1.16.1-1.el9 04/01/2014
Oct 09 09:31:56 localhost kernel: Hypervisor detected: KVM
Oct 09 09:31:56 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 09 09:31:56 localhost kernel: kvm-clock: using sched offset of 1895964190071 cycles
Oct 09 09:31:56 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 09 09:31:56 localhost kernel: tsc: Detected 2445.406 MHz processor
Oct 09 09:31:56 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 09 09:31:56 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 09 09:31:56 localhost kernel: last_pfn = 0x280000 max_arch_pfn = 0x400000000
Oct 09 09:31:56 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct 09 09:31:56 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct 09 09:31:56 localhost kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Oct 09 09:31:56 localhost kernel: found SMP MP-table at [mem 0x000f5b60-0x000f5b6f]
Oct 09 09:31:56 localhost kernel: Using GB pages for direct mapping
Oct 09 09:31:56 localhost kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct 09 09:31:56 localhost kernel: ACPI: Early table checksum verification disabled
Oct 09 09:31:56 localhost kernel: ACPI: RSDP 0x00000000000F5B20 000014 (v00 BOCHS )
Oct 09 09:31:56 localhost kernel: ACPI: RSDT 0x000000007FFE35EB 000034 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: FACP 0x000000007FFE3403 0000F4 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: DSDT 0x000000007FFDFCC0 003743 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: FACS 0x000000007FFDFC80 000040
Oct 09 09:31:56 localhost kernel: ACPI: APIC 0x000000007FFE34F7 000090 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: MCFG 0x000000007FFE3587 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: WAET 0x000000007FFE35C3 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct 09 09:31:56 localhost kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe3403-0x7ffe34f6]
Oct 09 09:31:56 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfcc0-0x7ffe3402]
Oct 09 09:31:56 localhost kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfc80-0x7ffdfcbf]
Oct 09 09:31:56 localhost kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe34f7-0x7ffe3586]
Oct 09 09:31:56 localhost kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe3587-0x7ffe35c2]
Oct 09 09:31:56 localhost kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe35c3-0x7ffe35ea]
Oct 09 09:31:56 localhost kernel: No NUMA configuration found
Oct 09 09:31:56 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000027fffffff]
Oct 09 09:31:56 localhost kernel: NODE_DATA(0) allocated [mem 0x27ffd5000-0x27fffffff]
Oct 09 09:31:56 localhost kernel: crashkernel reserved: 0x000000006f000000 - 0x000000007f000000 (256 MB)
Oct 09 09:31:56 localhost kernel: Zone ranges:
Oct 09 09:31:56 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct 09 09:31:56 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct 09 09:31:56 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000027fffffff]
Oct 09 09:31:56 localhost kernel:   Device   empty
Oct 09 09:31:56 localhost kernel: Movable zone start for each node
Oct 09 09:31:56 localhost kernel: Early memory node ranges
Oct 09 09:31:56 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct 09 09:31:56 localhost kernel:   node   0: [mem 0x0000000000100000-0x000000007ffdafff]
Oct 09 09:31:56 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000027fffffff]
Oct 09 09:31:56 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000027fffffff]
Oct 09 09:31:56 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 09 09:31:56 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct 09 09:31:56 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct 09 09:31:56 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Oct 09 09:31:56 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 09 09:31:56 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 09 09:31:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct 09 09:31:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 09 09:31:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 09 09:31:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 09 09:31:56 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 09 09:31:56 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 09 09:31:56 localhost kernel: TSC deadline timer available
Oct 09 09:31:56 localhost kernel: CPU topo: Max. logical packages:   4
Oct 09 09:31:56 localhost kernel: CPU topo: Max. logical dies:       4
Oct 09 09:31:56 localhost kernel: CPU topo: Max. dies per package:   1
Oct 09 09:31:56 localhost kernel: CPU topo: Max. threads per core:   1
Oct 09 09:31:56 localhost kernel: CPU topo: Num. cores per package:     1
Oct 09 09:31:56 localhost kernel: CPU topo: Num. threads per package:   1
Oct 09 09:31:56 localhost kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Oct 09 09:31:56 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct 09 09:31:56 localhost kernel: kvm-guest: KVM setup pv remote TLB flush
Oct 09 09:31:56 localhost kernel: kvm-guest: setup PV sched yield
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x7ffdb000-0x7fffffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x80000000-0xafffffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xb0000000-0xbfffffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfed1bfff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed1c000-0xfed1ffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfed20000-0xfeffbfff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct 09 09:31:56 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct 09 09:31:56 localhost kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Oct 09 09:31:56 localhost kernel: Booting paravirtualized kernel on KVM
Oct 09 09:31:56 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 09 09:31:56 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Oct 09 09:31:56 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u524288
Oct 09 09:31:56 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u524288 alloc=1*2097152
Oct 09 09:31:56 localhost kernel: pcpu-alloc: [0] 0 1 2 3 
Oct 09 09:31:56 localhost kernel: kvm-guest: PV spinlocks enabled
Oct 09 09:31:56 localhost kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 09 09:31:56 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:56 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct 09 09:31:56 localhost kernel: random: crng init done
Oct 09 09:31:56 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 09 09:31:56 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 09 09:31:56 localhost kernel: Fallback order for Node 0: 0 
Oct 09 09:31:56 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct 09 09:31:56 localhost kernel: Policy zone: Normal
Oct 09 09:31:56 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 09 09:31:56 localhost kernel: software IO TLB: area num 4.
Oct 09 09:31:56 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 09 09:31:56 localhost kernel: ftrace: allocating 49370 entries in 193 pages
Oct 09 09:31:56 localhost kernel: ftrace: allocated 193 pages with 3 groups
Oct 09 09:31:56 localhost kernel: Dynamic Preempt: voluntary
Oct 09 09:31:56 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 09 09:31:56 localhost kernel: rcu:         RCU event tracing is enabled.
Oct 09 09:31:56 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=4.
Oct 09 09:31:56 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Oct 09 09:31:56 localhost kernel:         Rude variant of Tasks RCU enabled.
Oct 09 09:31:56 localhost kernel:         Tracing variant of Tasks RCU enabled.
Oct 09 09:31:56 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 09 09:31:56 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 09 09:31:56 localhost kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:56 localhost kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:56 localhost kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Oct 09 09:31:56 localhost kernel: NR_IRQS: 524544, nr_irqs: 456, preallocated irqs: 16
Oct 09 09:31:56 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct 09 09:31:56 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct 09 09:31:56 localhost kernel: Console: colour VGA+ 80x25
Oct 09 09:31:56 localhost kernel: printk: console [ttyS0] enabled
Oct 09 09:31:56 localhost kernel: ACPI: Core revision 20230331
Oct 09 09:31:56 localhost kernel: APIC: Switch to symmetric I/O mode setup
Oct 09 09:31:56 localhost kernel: x2apic enabled
Oct 09 09:31:56 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Oct 09 09:31:56 localhost kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Oct 09 09:31:56 localhost kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Oct 09 09:31:56 localhost kernel: kvm-guest: setup PV IPIs
Oct 09 09:31:56 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct 09 09:31:56 localhost kernel: Calibrating delay loop (skipped) preset value.. 4890.81 BogoMIPS (lpj=2445406)
Oct 09 09:31:56 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct 09 09:31:56 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct 09 09:31:56 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct 09 09:31:56 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 09 09:31:56 localhost kernel: Spectre V2 : Mitigation: Retpolines
Oct 09 09:31:56 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct 09 09:31:56 localhost kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Oct 09 09:31:56 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 09 09:31:56 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct 09 09:31:56 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct 09 09:31:56 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct 09 09:31:56 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct 09 09:31:56 localhost kernel: Transient Scheduler Attacks: Vulnerable: No microcode
Oct 09 09:31:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 09 09:31:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 09 09:31:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 09 09:31:56 localhost kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Oct 09 09:31:56 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct 09 09:31:56 localhost kernel: x86/fpu: xstate_offset[9]:  832, xstate_sizes[9]:    8
Oct 09 09:31:56 localhost kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Oct 09 09:31:56 localhost kernel: Freeing SMP alternatives memory: 40K
Oct 09 09:31:56 localhost kernel: pid_max: default: 32768 minimum: 301
Oct 09 09:31:56 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct 09 09:31:56 localhost kernel: landlock: Up and running.
Oct 09 09:31:56 localhost kernel: Yama: becoming mindful.
Oct 09 09:31:56 localhost kernel: SELinux:  Initializing.
Oct 09 09:31:56 localhost kernel: LSM support for eBPF active
Oct 09 09:31:56 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 09:31:56 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 09 09:31:56 localhost kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Oct 09 09:31:56 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct 09 09:31:56 localhost kernel: ... version:                0
Oct 09 09:31:56 localhost kernel: ... bit width:              48
Oct 09 09:31:56 localhost kernel: ... generic registers:      6
Oct 09 09:31:56 localhost kernel: ... value mask:             0000ffffffffffff
Oct 09 09:31:56 localhost kernel: ... max period:             00007fffffffffff
Oct 09 09:31:56 localhost kernel: ... fixed-purpose events:   0
Oct 09 09:31:56 localhost kernel: ... event mask:             000000000000003f
Oct 09 09:31:56 localhost kernel: signal: max sigframe size: 3376
Oct 09 09:31:56 localhost kernel: rcu: Hierarchical SRCU implementation.
Oct 09 09:31:56 localhost kernel: rcu:         Max phase no-delay instances is 400.
Oct 09 09:31:56 localhost kernel: smp: Bringing up secondary CPUs ...
Oct 09 09:31:56 localhost kernel: smpboot: x86: Booting SMP configuration:
Oct 09 09:31:56 localhost kernel: .... node  #0, CPUs:      #1 #2 #3
Oct 09 09:31:56 localhost kernel: smp: Brought up 1 node, 4 CPUs
Oct 09 09:31:56 localhost kernel: smpboot: Total of 4 processors activated (19563.24 BogoMIPS)
Oct 09 09:31:56 localhost kernel: node 0 deferred pages initialised in 16ms
Oct 09 09:31:56 localhost kernel: Memory: 7767908K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 615456K reserved, 0K cma-reserved)
Oct 09 09:31:56 localhost kernel: devtmpfs: initialized
Oct 09 09:31:56 localhost kernel: x86/mm: Memory block size: 128MB
Oct 09 09:31:56 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 09 09:31:56 localhost kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 09 09:31:56 localhost kernel: pinctrl core: initialized pinctrl subsystem
Oct 09 09:31:56 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 09 09:31:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct 09 09:31:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 09 09:31:56 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 09 09:31:56 localhost kernel: audit: initializing netlink subsys (disabled)
Oct 09 09:31:56 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct 09 09:31:56 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 09 09:31:56 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 09 09:31:56 localhost kernel: audit: type=2000 audit(1760002315.537:1): state=initialized audit_enabled=0 res=1
Oct 09 09:31:56 localhost kernel: cpuidle: using governor menu
Oct 09 09:31:56 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 09 09:31:56 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Oct 09 09:31:56 localhost kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Oct 09 09:31:56 localhost kernel: PCI: Using configuration type 1 for base access
Oct 09 09:31:56 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 09 09:31:56 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct 09 09:31:56 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct 09 09:31:56 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct 09 09:31:56 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct 09 09:31:56 localhost kernel: Demotion targets for Node 0: null
Oct 09 09:31:56 localhost kernel: cryptd: max_cpu_qlen set to 1000
Oct 09 09:31:56 localhost kernel: ACPI: Added _OSI(Module Device)
Oct 09 09:31:56 localhost kernel: ACPI: Added _OSI(Processor Device)
Oct 09 09:31:56 localhost kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 09 09:31:56 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 09 09:31:56 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 09 09:31:56 localhost kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct 09 09:31:56 localhost kernel: ACPI: Interpreter enabled
Oct 09 09:31:56 localhost kernel: ACPI: PM: (supports S0 S5)
Oct 09 09:31:56 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Oct 09 09:31:56 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 09 09:31:56 localhost kernel: PCI: Using E820 reservations for host bridge windows
Oct 09 09:31:56 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Oct 09 09:31:56 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 09 09:31:56 localhost kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct 09 09:31:56 localhost kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR DPC]
Oct 09 09:31:56 localhost kernel: acpi PNP0A08:00: _OSC: OS now controls [SHPCHotplug PME AER PCIeCapability]
Oct 09 09:31:56 localhost kernel: PCI host bridge to bus 0000:00
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x280000000-0xa7fffffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: BAR 0 [mem 0xf9800000-0xf9ffffff pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: BAR 2 [mem 0xfc200000-0xfc203fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: BAR 4 [mem 0xfea10000-0xfea10fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: ROM [mem 0xfea00000-0xfea0ffff pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfea11000-0xfea11fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: BAR 0 [mem 0xfea12000-0xfea12fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: BAR 0 [mem 0xfea13000-0xfea13fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: BAR 0 [mem 0xfea14000-0xfea14fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: BAR 0 [mem 0xfea15000-0xfea15fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: BAR 0 [mem 0xfea16000-0xfea16fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: BAR 0 [mem 0xfea17000-0xfea17fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: BAR 0 [mem 0xfea18000-0xfea18fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: BAR 0 [mem 0xfea19000-0xfea19fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: BAR 0 [mem 0xfea1a000-0xfea1afff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: BAR 0 [mem 0xfea1b000-0xfea1bfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: BAR 0 [mem 0xfea1c000-0xfea1cfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: BAR 0 [mem 0xfea1d000-0xfea1dfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: BAR 0 [mem 0xfea1e000-0xfea1efff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: BAR 0 [mem 0xfea1f000-0xfea1ffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: BAR 0 [mem 0xfea20000-0xfea20fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: BAR 0 [mem 0xfea21000-0xfea21fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.0: quirk: [io  0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.2: BAR 4 [io  0xd040-0xd05f]
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfea22000-0xfea22fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:00:1f.3: BAR 4 [io  0x0700-0x073f]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 PCIe to PCI/PCI-X bridge
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0: BAR 0 [mem 0xfc800000-0xfc8000ff 64bit]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:02: extended config space not accessible
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [1] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [2] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [3] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [4] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [5] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [6] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [7] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [8] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [9] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [10] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [11] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [12] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [13] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [14] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [15] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [16] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [17] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [18] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [19] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [20] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [21] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [22] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [23] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [24] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [25] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [26] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [27] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [28] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [29] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [30] registered
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [31] registered
Oct 09 09:31:56 localhost kernel: pci 0000:02:01.0: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:02:01.0: BAR 4 [io  0xc000-0xc01f]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-2] registered
Oct 09 09:31:56 localhost kernel: pci 0000:03:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:03:00.0: BAR 1 [mem 0xfe840000-0xfe840fff]
Oct 09 09:31:56 localhost kernel: pci 0000:03:00.0: BAR 4 [mem 0xfbe00000-0xfbe03fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:03:00.0: ROM [mem 0xfe800000-0xfe83ffff pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-3] registered
Oct 09 09:31:56 localhost kernel: pci 0000:04:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:04:00.0: BAR 1 [mem 0xfe600000-0xfe600fff]
Oct 09 09:31:56 localhost kernel: pci 0000:04:00.0: BAR 4 [mem 0xfbc00000-0xfbc03fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-4] registered
Oct 09 09:31:56 localhost kernel: pci 0000:05:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:05:00.0: BAR 4 [mem 0xfba00000-0xfba03fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-5] registered
Oct 09 09:31:56 localhost kernel: pci 0000:06:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:06:00.0: BAR 4 [mem 0xfb800000-0xfb803fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-6] registered
Oct 09 09:31:56 localhost kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Oct 09 09:31:56 localhost kernel: pci 0000:07:00.0: BAR 1 [mem 0xfe040000-0xfe040fff]
Oct 09 09:31:56 localhost kernel: pci 0000:07:00.0: BAR 4 [mem 0xfb600000-0xfb603fff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:07:00.0: ROM [mem 0xfe000000-0xfe03ffff pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-7] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-8] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-9] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-10] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-11] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-12] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-13] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-14] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-15] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-16] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:56 localhost kernel: acpiphp: Slot [0-17] registered
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Oct 09 09:31:56 localhost kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Oct 09 09:31:56 localhost kernel: iommu: Default domain type: Translated
Oct 09 09:31:56 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 09 09:31:56 localhost kernel: SCSI subsystem initialized
Oct 09 09:31:56 localhost kernel: ACPI: bus type USB registered
Oct 09 09:31:56 localhost kernel: usbcore: registered new interface driver usbfs
Oct 09 09:31:56 localhost kernel: usbcore: registered new interface driver hub
Oct 09 09:31:56 localhost kernel: usbcore: registered new device driver usb
Oct 09 09:31:56 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 09 09:31:56 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct 09 09:31:56 localhost kernel: PTP clock support registered
Oct 09 09:31:56 localhost kernel: EDAC MC: Ver: 3.0.0
Oct 09 09:31:56 localhost kernel: NetLabel: Initializing
Oct 09 09:31:56 localhost kernel: NetLabel:  domain hash size = 128
Oct 09 09:31:56 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct 09 09:31:56 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Oct 09 09:31:56 localhost kernel: PCI: Using ACPI for IRQ routing
Oct 09 09:31:56 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 09 09:31:56 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Oct 09 09:31:56 localhost kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Oct 09 09:31:56 localhost kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct 09 09:31:56 localhost kernel: vgaarb: loaded
Oct 09 09:31:56 localhost kernel: clocksource: Switched to clocksource kvm-clock
Oct 09 09:31:56 localhost kernel: VFS: Disk quotas dquot_6.6.0
Oct 09 09:31:56 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 09 09:31:56 localhost kernel: pnp: PnP ACPI init
Oct 09 09:31:56 localhost kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Oct 09 09:31:56 localhost kernel: pnp: PnP ACPI: found 5 devices
Oct 09 09:31:56 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 09 09:31:56 localhost kernel: NET: Registered PF_INET protocol family
Oct 09 09:31:56 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 09 09:31:56 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 09 09:31:56 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 09 09:31:56 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 09 09:31:56 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 09 09:31:56 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 09 09:31:56 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct 09 09:31:56 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 09:31:56 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 09 09:31:56 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 09 09:31:56 localhost kernel: NET: Registered PF_XDP protocol family
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x0fff] to [bus 03] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: bridge window [io  0x1000-0x0fff] to [bus 04] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: bridge window [io  0x1000-0x0fff] to [bus 05] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: bridge window [io  0x1000-0x0fff] to [bus 06] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: bridge window [io  0x1000-0x0fff] to [bus 07] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: bridge window [io  0x1000-0x0fff] to [bus 08] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: bridge window [io  0x1000-0x0fff] to [bus 09] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: bridge window [io  0x1000-0x0fff] to [bus 0a] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: bridge window [io  0x1000-0x0fff] to [bus 0b] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: bridge window [io  0x1000-0x0fff] to [bus 0c] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: bridge window [io  0x1000-0x0fff] to [bus 0d] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: bridge window [io  0x1000-0x0fff] to [bus 0e] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: bridge window [io  0x1000-0x0fff] to [bus 0f] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: bridge window [io  0x1000-0x0fff] to [bus 10] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: bridge window [io  0x1000-0x0fff] to [bus 11] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x0fff] to [bus 12] add_size 1000
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: bridge window [io  0x1000-0x1fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: bridge window [io  0x2000-0x2fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: bridge window [io  0x3000-0x3fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: bridge window [io  0x4000-0x4fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: bridge window [io  0x5000-0x5fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: bridge window [io  0x6000-0x6fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: bridge window [io  0x7000-0x7fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: bridge window [io  0x8000-0x8fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: bridge window [io  0x9000-0x9fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: bridge window [io  0xa000-0xafff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: bridge window [io  0xb000-0xbfff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: bridge window [io  0xe000-0xefff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: bridge window [io  0xf000-0xffff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: bridge window [io  0x1000-0x1fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: bridge window [io  0x2000-0x2fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: bridge window [io  0x3000-0x3fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: bridge window [io  0x4000-0x4fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: bridge window [io  0x5000-0x5fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: bridge window [io  0x6000-0x6fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: bridge window [io  0x7000-0x7fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: bridge window [io  0x8000-0x8fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: bridge window [io  0x9000-0x9fff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: bridge window [io  0xa000-0xafff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: bridge window [io  0xb000-0xbfff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: bridge window [io  0xe000-0xefff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: bridge window [io  0xf000-0xffff]: assigned
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: can't assign; no space
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: bridge window [io  size 0x1000]: failed to assign
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:01:00.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.0:   bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.1:   bridge window [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.2:   bridge window [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.3:   bridge window [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4:   bridge window [io  0xf000-0xffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.4:   bridge window [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5:   bridge window [io  0xe000-0xefff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.5:   bridge window [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6:   bridge window [io  0xb000-0xbfff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.6:   bridge window [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7:   bridge window [io  0xa000-0xafff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:02.7:   bridge window [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0: PCI bridge to [bus 0a]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0:   bridge window [io  0x9000-0x9fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.0:   bridge window [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1: PCI bridge to [bus 0b]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1:   bridge window [io  0x8000-0x8fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.1:   bridge window [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2: PCI bridge to [bus 0c]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2:   bridge window [io  0x7000-0x7fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.2:   bridge window [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3: PCI bridge to [bus 0d]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3:   bridge window [io  0x6000-0x6fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.3:   bridge window [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4: PCI bridge to [bus 0e]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4:   bridge window [io  0x5000-0x5fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.4:   bridge window [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5: PCI bridge to [bus 0f]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5:   bridge window [io  0x4000-0x4fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.5:   bridge window [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6: PCI bridge to [bus 10]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6:   bridge window [io  0x3000-0x3fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.6:   bridge window [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7: PCI bridge to [bus 11]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7:   bridge window [io  0x2000-0x2fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:03.7:   bridge window [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0: PCI bridge to [bus 12]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0:   bridge window [io  0x1000-0x1fff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:56 localhost kernel: pci 0000:00:04.0:   bridge window [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:00: resource 9 [mem 0x280000000-0xa7fffffff window]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:01: resource 0 [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:01: resource 1 [mem 0xfc600000-0xfc9fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:01: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:02: resource 0 [io  0xc000-0xcfff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:02: resource 1 [mem 0xfc600000-0xfc7fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:02: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:03: resource 2 [mem 0xfbe00000-0xfbffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:04: resource 2 [mem 0xfbc00000-0xfbdfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:05: resource 2 [mem 0xfba00000-0xfbbfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:06: resource 0 [io  0xf000-0xffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:06: resource 2 [mem 0xfb800000-0xfb9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:07: resource 0 [io  0xe000-0xefff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:07: resource 2 [mem 0xfb600000-0xfb7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:08: resource 0 [io  0xb000-0xbfff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:08: resource 2 [mem 0xfb400000-0xfb5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:09: resource 0 [io  0xa000-0xafff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:09: resource 2 [mem 0xfb200000-0xfb3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0a: resource 0 [io  0x9000-0x9fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0a: resource 1 [mem 0xfda00000-0xfdbfffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0a: resource 2 [mem 0xfb000000-0xfb1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0b: resource 0 [io  0x8000-0x8fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0b: resource 1 [mem 0xfd800000-0xfd9fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0b: resource 2 [mem 0xfae00000-0xfaffffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0c: resource 0 [io  0x7000-0x7fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0c: resource 1 [mem 0xfd600000-0xfd7fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0c: resource 2 [mem 0xfac00000-0xfadfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0d: resource 0 [io  0x6000-0x6fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0d: resource 1 [mem 0xfd400000-0xfd5fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0d: resource 2 [mem 0xfaa00000-0xfabfffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0e: resource 0 [io  0x5000-0x5fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0e: resource 1 [mem 0xfd200000-0xfd3fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0e: resource 2 [mem 0xfa800000-0xfa9fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0f: resource 0 [io  0x4000-0x4fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0f: resource 1 [mem 0xfd000000-0xfd1fffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:0f: resource 2 [mem 0xfa600000-0xfa7fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:10: resource 0 [io  0x3000-0x3fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:10: resource 1 [mem 0xfce00000-0xfcffffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:10: resource 2 [mem 0xfa400000-0xfa5fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:11: resource 0 [io  0x2000-0x2fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:11: resource 1 [mem 0xfcc00000-0xfcdfffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:11: resource 2 [mem 0xfa200000-0xfa3fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:12: resource 0 [io  0x1000-0x1fff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:12: resource 1 [mem 0xfca00000-0xfcbfffff]
Oct 09 09:31:56 localhost kernel: pci_bus 0000:12: resource 2 [mem 0xfa000000-0xfa1fffff 64bit pref]
Oct 09 09:31:56 localhost kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Oct 09 09:31:56 localhost kernel: PCI: CLS 0 bytes, default 64
Oct 09 09:31:56 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 09 09:31:56 localhost kernel: software IO TLB: mapped [mem 0x000000006b000000-0x000000006f000000] (64MB)
Oct 09 09:31:56 localhost kernel: Trying to unpack rootfs image as initramfs...
Oct 09 09:31:56 localhost kernel: ACPI: bus type thunderbolt registered
Oct 09 09:31:56 localhost kernel: Initialise system trusted keyrings
Oct 09 09:31:56 localhost kernel: Key type blacklist registered
Oct 09 09:31:56 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct 09 09:31:56 localhost kernel: zbud: loaded
Oct 09 09:31:56 localhost kernel: integrity: Platform Keyring initialized
Oct 09 09:31:56 localhost kernel: integrity: Machine keyring initialized
Oct 09 09:31:56 localhost kernel: Freeing initrd memory: 86104K
Oct 09 09:31:56 localhost kernel: NET: Registered PF_ALG protocol family
Oct 09 09:31:56 localhost kernel: xor: automatically using best checksumming function   avx       
Oct 09 09:31:56 localhost kernel: Key type asymmetric registered
Oct 09 09:31:56 localhost kernel: Asymmetric key parser 'x509' registered
Oct 09 09:31:56 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct 09 09:31:56 localhost kernel: io scheduler mq-deadline registered
Oct 09 09:31:56 localhost kernel: io scheduler kyber registered
Oct 09 09:31:56 localhost kernel: io scheduler bfq registered
Oct 09 09:31:56 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Oct 09 09:31:56 localhost kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 33
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 33
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 34
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 34
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 35
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 35
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 36
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 36
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 37
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 37
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 38
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 38
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 39
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 39
Oct 09 09:31:56 localhost kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 40
Oct 09 09:31:56 localhost kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 40
Oct 09 09:31:56 localhost kernel: shpchp 0000:01:00.0: HPC vendor_id 1b36 device_id e ss_vid 0 ss_did 0
Oct 09 09:31:56 localhost kernel: shpchp 0000:01:00.0: pci_hp_register failed with error -16
Oct 09 09:31:56 localhost kernel: shpchp 0000:01:00.0: Slot initialization failed
Oct 09 09:31:56 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct 09 09:31:56 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct 09 09:31:56 localhost kernel: ACPI: button: Power Button [PWRF]
Oct 09 09:31:56 localhost kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Oct 09 09:31:56 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 09 09:31:56 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 09 09:31:56 localhost kernel: Non-volatile memory driver v1.3
Oct 09 09:31:56 localhost kernel: rdac: device handler registered
Oct 09 09:31:56 localhost kernel: hp_sw: device handler registered
Oct 09 09:31:56 localhost kernel: emc: device handler registered
Oct 09 09:31:56 localhost kernel: alua: device handler registered
Oct 09 09:31:56 localhost kernel: uhci_hcd 0000:02:01.0: UHCI Host Controller
Oct 09 09:31:56 localhost kernel: uhci_hcd 0000:02:01.0: new USB bus registered, assigned bus number 1
Oct 09 09:31:56 localhost kernel: uhci_hcd 0000:02:01.0: detected 2 ports
Oct 09 09:31:56 localhost kernel: uhci_hcd 0000:02:01.0: irq 22, io port 0x0000c000
Oct 09 09:31:56 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct 09 09:31:56 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct 09 09:31:56 localhost kernel: usb usb1: Product: UHCI Host Controller
Oct 09 09:31:56 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct 09 09:31:56 localhost kernel: usb usb1: SerialNumber: 0000:02:01.0
Oct 09 09:31:56 localhost kernel: hub 1-0:1.0: USB hub found
Oct 09 09:31:56 localhost kernel: hub 1-0:1.0: 2 ports detected
Oct 09 09:31:56 localhost kernel: usbcore: registered new interface driver usbserial_generic
Oct 09 09:31:56 localhost kernel: usbserial: USB Serial support registered for generic
Oct 09 09:31:56 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 09 09:31:56 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 09 09:31:56 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 09 09:31:56 localhost kernel: mousedev: PS/2 mouse device common for all mice
Oct 09 09:31:56 localhost kernel: rtc_cmos 00:03: RTC can wake from S4
Oct 09 09:31:56 localhost kernel: rtc_cmos 00:03: registered as rtc0
Oct 09 09:31:56 localhost kernel: rtc_cmos 00:03: setting system clock to 2025-10-09T09:31:56 UTC (1760002316)
Oct 09 09:31:56 localhost kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Oct 09 09:31:56 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct 09 09:31:56 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 09 09:31:56 localhost kernel: usbcore: registered new interface driver usbhid
Oct 09 09:31:56 localhost kernel: usbhid: USB HID core driver
Oct 09 09:31:56 localhost kernel: drop_monitor: Initializing network drop monitor service
Oct 09 09:31:56 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct 09 09:31:56 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct 09 09:31:56 localhost kernel: Initializing XFRM netlink socket
Oct 09 09:31:56 localhost kernel: NET: Registered PF_INET6 protocol family
Oct 09 09:31:56 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct 09 09:31:56 localhost kernel: Segment Routing with IPv6
Oct 09 09:31:56 localhost kernel: NET: Registered PF_PACKET protocol family
Oct 09 09:31:56 localhost kernel: mpls_gso: MPLS GSO support
Oct 09 09:31:56 localhost kernel: IPI shorthand broadcast: enabled
Oct 09 09:31:56 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Oct 09 09:31:56 localhost kernel: AES CTR mode by8 optimization enabled
Oct 09 09:31:56 localhost kernel: sched_clock: Marking stable (1017001537, 145248786)->(1400827785, -238577462)
Oct 09 09:31:56 localhost kernel: registered taskstats version 1
Oct 09 09:31:56 localhost kernel: Loading compiled-in X.509 certificates
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct 09 09:31:56 localhost kernel: Demotion targets for Node 0: null
Oct 09 09:31:56 localhost kernel: page_owner is disabled
Oct 09 09:31:56 localhost kernel: Key type .fscrypt registered
Oct 09 09:31:56 localhost kernel: Key type fscrypt-provisioning registered
Oct 09 09:31:56 localhost kernel: Key type big_key registered
Oct 09 09:31:56 localhost kernel: Key type encrypted registered
Oct 09 09:31:56 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 09 09:31:56 localhost kernel: Loading compiled-in module X.509 certificates
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct 09 09:31:56 localhost kernel: ima: Allocated hash algorithm: sha256
Oct 09 09:31:56 localhost kernel: ima: No architecture policies found
Oct 09 09:31:56 localhost kernel: evm: Initialising EVM extended attributes:
Oct 09 09:31:56 localhost kernel: evm: security.selinux
Oct 09 09:31:56 localhost kernel: evm: security.SMACK64 (disabled)
Oct 09 09:31:56 localhost kernel: evm: security.SMACK64EXEC (disabled)
Oct 09 09:31:56 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct 09 09:31:56 localhost kernel: evm: security.SMACK64MMAP (disabled)
Oct 09 09:31:56 localhost kernel: evm: security.apparmor (disabled)
Oct 09 09:31:56 localhost kernel: evm: security.ima
Oct 09 09:31:56 localhost kernel: evm: security.capability
Oct 09 09:31:56 localhost kernel: evm: HMAC attrs: 0x1
Oct 09 09:31:56 localhost kernel: Running certificate verification RSA selftest
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct 09 09:31:56 localhost kernel: Running certificate verification ECDSA selftest
Oct 09 09:31:56 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct 09 09:31:56 localhost kernel: clk: Disabling unused clocks
Oct 09 09:31:56 localhost kernel: Freeing unused decrypted memory: 2028K
Oct 09 09:31:56 localhost kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct 09 09:31:56 localhost kernel: Write protecting the kernel read-only data: 30720k
Oct 09 09:31:56 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct 09 09:31:56 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct 09 09:31:56 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct 09 09:31:56 localhost kernel: Run /init as init process
Oct 09 09:31:56 localhost kernel:   with arguments:
Oct 09 09:31:56 localhost kernel:     /init
Oct 09 09:31:56 localhost kernel:   with environment:
Oct 09 09:31:56 localhost kernel:     HOME=/
Oct 09 09:31:56 localhost kernel:     TERM=linux
Oct 09 09:31:56 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64
Oct 09 09:31:56 localhost systemd[1]: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 09:31:56 localhost systemd[1]: Detected virtualization kvm.
Oct 09 09:31:56 localhost systemd[1]: Detected architecture x86-64.
Oct 09 09:31:56 localhost systemd[1]: Running in initrd.
Oct 09 09:31:56 localhost systemd[1]: No hostname configured, using default hostname.
Oct 09 09:31:56 localhost systemd[1]: Hostname set to <localhost>.
Oct 09 09:31:56 localhost systemd[1]: Initializing machine ID from VM UUID.
Oct 09 09:31:56 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Oct 09 09:31:56 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 09:31:56 localhost systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 09:31:56 localhost systemd[1]: Reached target Initrd /usr File System.
Oct 09 09:31:56 localhost systemd[1]: Reached target Local File Systems.
Oct 09 09:31:56 localhost systemd[1]: Reached target Path Units.
Oct 09 09:31:56 localhost systemd[1]: Reached target Slice Units.
Oct 09 09:31:56 localhost systemd[1]: Reached target Swaps.
Oct 09 09:31:56 localhost systemd[1]: Reached target Timer Units.
Oct 09 09:31:56 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 09:31:56 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Oct 09 09:31:56 localhost systemd[1]: Listening on Journal Socket.
Oct 09 09:31:56 localhost systemd[1]: Listening on udev Control Socket.
Oct 09 09:31:56 localhost systemd[1]: Listening on udev Kernel Socket.
Oct 09 09:31:56 localhost systemd[1]: Reached target Socket Units.
Oct 09 09:31:56 localhost systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 09:31:56 localhost systemd[1]: Starting Journal Service...
Oct 09 09:31:56 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct 09 09:31:56 localhost systemd[1]: Starting Apply Kernel Variables...
Oct 09 09:31:56 localhost systemd[1]: Starting Create System Users...
Oct 09 09:31:56 localhost systemd[1]: Starting Setup Virtual Console...
Oct 09 09:31:56 localhost systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 09:31:56 localhost systemd[1]: Finished Apply Kernel Variables.
Oct 09 09:31:56 localhost systemd[1]: Finished Create System Users.
Oct 09 09:31:56 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct 09 09:31:56 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct 09 09:31:56 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Oct 09 09:31:56 localhost kernel: usb 1-1: Manufacturer: QEMU
Oct 09 09:31:56 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:02.0:00.0:01.0-1
Oct 09 09:31:56 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 09:31:56 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.0/0000:01:00.0/0000:02:01.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct 09 09:31:56 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:01.0-1/input0
Oct 09 09:31:56 localhost systemd-journald[282]: Journal started
Oct 09 09:31:56 localhost systemd-journald[282]: Runtime Journal (/run/log/journal/ed71292475ec452aa842ae61b9b9ed0c) is 8.0M, max 153.6M, 145.6M free.
Oct 09 09:31:56 localhost systemd-sysusers[285]: Creating group 'users' with GID 100.
Oct 09 09:31:56 localhost systemd-sysusers[285]: Creating group 'dbus' with GID 81.
Oct 09 09:31:56 localhost systemd-sysusers[285]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct 09 09:31:56 localhost systemd[1]: Started Journal Service.
Oct 09 09:31:56 localhost systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 09:31:57 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 09:31:57 localhost systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 09:31:57 localhost systemd[1]: Finished Setup Virtual Console.
Oct 09 09:31:57 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct 09 09:31:57 localhost systemd[1]: Starting dracut cmdline hook...
Oct 09 09:31:57 localhost dracut-cmdline[300]: dracut-9 dracut-057-102.git20250818.el9
Oct 09 09:31:57 localhost dracut-cmdline[300]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct 09 09:31:57 localhost systemd[1]: Finished dracut cmdline hook.
Oct 09 09:31:57 localhost systemd[1]: Starting dracut pre-udev hook...
Oct 09 09:31:57 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 09 09:31:57 localhost kernel: device-mapper: uevent: version 1.0.3
Oct 09 09:31:57 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct 09 09:31:57 localhost kernel: RPC: Registered named UNIX socket transport module.
Oct 09 09:31:57 localhost kernel: RPC: Registered udp transport module.
Oct 09 09:31:57 localhost kernel: RPC: Registered tcp transport module.
Oct 09 09:31:57 localhost kernel: RPC: Registered tcp-with-tls transport module.
Oct 09 09:31:57 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct 09 09:31:57 localhost rpc.statd[416]: Version 2.5.4 starting
Oct 09 09:31:57 localhost rpc.statd[416]: Initializing NSM state
Oct 09 09:31:57 localhost rpc.idmapd[421]: Setting log level to 0
Oct 09 09:31:57 localhost systemd[1]: Finished dracut pre-udev hook.
Oct 09 09:31:57 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 09:31:57 localhost systemd-udevd[434]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 09:31:57 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 09:31:57 localhost systemd[1]: Starting dracut pre-trigger hook...
Oct 09 09:31:57 localhost systemd[1]: Finished dracut pre-trigger hook.
Oct 09 09:31:57 localhost systemd[1]: Starting Coldplug All udev Devices...
Oct 09 09:31:57 localhost systemd[1]: Created slice Slice /system/modprobe.
Oct 09 09:31:57 localhost systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:31:57 localhost systemd[1]: Finished Coldplug All udev Devices.
Oct 09 09:31:57 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:31:57 localhost systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:31:57 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 09:31:57 localhost systemd[1]: Reached target Network.
Oct 09 09:31:57 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct 09 09:31:57 localhost systemd[1]: Starting dracut initqueue hook...
Oct 09 09:31:57 localhost kernel: virtio_blk virtio2: 4/0/0 default/read/poll queues
Oct 09 09:31:57 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct 09 09:31:57 localhost kernel:  vda: vda1
Oct 09 09:31:57 localhost kernel: libata version 3.00 loaded.
Oct 09 09:31:57 localhost systemd-udevd[446]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:57 localhost systemd-udevd[477]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:57 localhost kernel: ahci 0000:00:1f.2: version 3.0
Oct 09 09:31:57 localhost kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Oct 09 09:31:57 localhost kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Oct 09 09:31:57 localhost kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Oct 09 09:31:57 localhost kernel: ahci 0000:00:1f.2: flags: 64bit ncq only 
Oct 09 09:31:57 localhost kernel: scsi host0: ahci
Oct 09 09:31:57 localhost kernel: scsi host1: ahci
Oct 09 09:31:57 localhost kernel: scsi host2: ahci
Oct 09 09:31:57 localhost kernel: scsi host3: ahci
Oct 09 09:31:57 localhost systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 09:31:57 localhost systemd[1]: Reached target Initrd Root Device.
Oct 09 09:31:57 localhost kernel: scsi host4: ahci
Oct 09 09:31:57 localhost kernel: scsi host5: ahci
Oct 09 09:31:57 localhost kernel: ata1: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22100 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata2: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22180 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata3: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22200 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata4: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22280 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata5: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22300 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata6: SATA max UDMA/133 abar m4096@0xfea22000 port 0xfea22380 irq 52 lpm-pol 0
Oct 09 09:31:57 localhost kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Oct 09 09:31:57 localhost kernel: ata2: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:57 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct 09 09:31:57 localhost kernel: ata1.00: applying bridge limits
Oct 09 09:31:57 localhost kernel: ata1.00: configured for UDMA/100
Oct 09 09:31:57 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct 09 09:31:57 localhost systemd[1]: Mounting Kernel Configuration File System...
Oct 09 09:31:57 localhost kernel: ata5: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:57 localhost kernel: ata4: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:57 localhost kernel: ata3: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:57 localhost kernel: ata6: SATA link down (SStatus 0 SControl 300)
Oct 09 09:31:57 localhost systemd[1]: Mounted Kernel Configuration File System.
Oct 09 09:31:57 localhost systemd[1]: Reached target System Initialization.
Oct 09 09:31:57 localhost systemd[1]: Reached target Basic System.
Oct 09 09:31:57 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct 09 09:31:57 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct 09 09:31:57 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct 09 09:31:57 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Oct 09 09:31:57 localhost systemd[1]: Finished dracut initqueue hook.
Oct 09 09:31:57 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 09:31:57 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Oct 09 09:31:57 localhost systemd[1]: Reached target Remote File Systems.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut pre-mount hook...
Oct 09 09:31:58 localhost systemd[1]: Finished dracut pre-mount hook.
Oct 09 09:31:58 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct 09 09:31:58 localhost systemd-fsck[531]: /usr/sbin/fsck.xfs: XFS file system.
Oct 09 09:31:58 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct 09 09:31:58 localhost systemd[1]: Mounting /sysroot...
Oct 09 09:31:58 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct 09 09:31:58 localhost kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct 09 09:31:58 localhost kernel: XFS (vda1): Ending clean mount
Oct 09 09:31:58 localhost systemd[1]: Mounted /sysroot.
Oct 09 09:31:58 localhost systemd[1]: Reached target Initrd Root File System.
Oct 09 09:31:58 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct 09 09:31:58 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct 09 09:31:58 localhost systemd[1]: Reached target Initrd File Systems.
Oct 09 09:31:58 localhost systemd[1]: Reached target Initrd Default Target.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut mount hook...
Oct 09 09:31:58 localhost systemd[1]: Finished dracut mount hook.
Oct 09 09:31:58 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct 09 09:31:58 localhost rpc.idmapd[421]: exiting on signal 15
Oct 09 09:31:58 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct 09 09:31:58 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct 09 09:31:58 localhost systemd[1]: Stopped target Network.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Timer Units.
Oct 09 09:31:58 localhost systemd[1]: dbus.socket: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Oct 09 09:31:58 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Initrd Default Target.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Basic System.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Initrd Root Device.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Initrd /usr File System.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Path Units.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Remote File Systems.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Slice Units.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Socket Units.
Oct 09 09:31:58 localhost systemd[1]: Stopped target System Initialization.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Local File Systems.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Swaps.
Oct 09 09:31:58 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut mount hook.
Oct 09 09:31:58 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut pre-mount hook.
Oct 09 09:31:58 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Oct 09 09:31:58 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct 09 09:31:58 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut initqueue hook.
Oct 09 09:31:58 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Apply Kernel Variables.
Oct 09 09:31:58 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Oct 09 09:31:58 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Coldplug All udev Devices.
Oct 09 09:31:58 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut pre-trigger hook.
Oct 09 09:31:58 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct 09 09:31:58 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Setup Virtual Console.
Oct 09 09:31:58 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct 09 09:31:58 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct 09 09:31:58 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Closed udev Control Socket.
Oct 09 09:31:58 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Closed udev Kernel Socket.
Oct 09 09:31:58 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut pre-udev hook.
Oct 09 09:31:58 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped dracut cmdline hook.
Oct 09 09:31:58 localhost systemd[1]: Starting Cleanup udev Database...
Oct 09 09:31:58 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct 09 09:31:58 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Oct 09 09:31:58 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Stopped Create System Users.
Oct 09 09:31:58 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct 09 09:31:58 localhost systemd[1]: Finished Cleanup udev Database.
Oct 09 09:31:58 localhost systemd[1]: Reached target Switch Root.
Oct 09 09:31:58 localhost systemd[1]: Starting Switch Root...
Oct 09 09:31:58 localhost systemd[1]: Switching root.
Oct 09 09:31:58 localhost systemd-journald[282]: Received SIGTERM from PID 1 (systemd).
Oct 09 09:31:58 localhost systemd-journald[282]: Journal stopped
Oct 09 09:31:59 compute-2 kernel: audit: type=1404 audit(1760002318.740:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:31:59 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:31:59 compute-2 kernel: audit: type=1403 audit(1760002318.849:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct 09 09:31:59 compute-2 systemd[1]: Successfully loaded SELinux policy in 111.606ms.
Oct 09 09:31:59 compute-2 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.659ms.
Oct 09 09:31:59 compute-2 systemd[1]: systemd 252-57.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 09 09:31:59 compute-2 systemd[1]: Detected virtualization kvm.
Oct 09 09:31:59 compute-2 systemd[1]: Detected architecture x86-64.
Oct 09 09:31:59 compute-2 systemd[1]: Hostname set to <compute-2>.
Oct 09 09:31:59 compute-2 systemd-rc-local-generator[614]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:31:59 compute-2 systemd-sysv-generator[617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:31:59 compute-2 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped Switch Root.
Oct 09 09:31:59 compute-2 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct 09 09:31:59 compute-2 systemd[1]: Created slice Slice /system/getty.
Oct 09 09:31:59 compute-2 systemd[1]: Created slice Slice /system/serial-getty.
Oct 09 09:31:59 compute-2 systemd[1]: Created slice Slice /system/sshd-keygen.
Oct 09 09:31:59 compute-2 systemd[1]: Created slice User and Session Slice.
Oct 09 09:31:59 compute-2 systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Oct 09 09:31:59 compute-2 systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Oct 09 09:31:59 compute-2 systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Local Encrypted Volumes.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped target Switch Root.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped target Initrd File Systems.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped target Initrd Root File System.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Local Integrity Protected Volumes.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Path Units.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target rpc_pipefs.target.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Slice Units.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Local Verity Protected Volumes.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on LVM2 poll daemon socket.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on RPCbind Server Activation Socket.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target RPC Port Mapper.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on Process Core Dump Socket.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on initctl Compatibility Named Pipe.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on udev Control Socket.
Oct 09 09:31:59 compute-2 systemd[1]: Listening on udev Kernel Socket.
Oct 09 09:31:59 compute-2 systemd[1]: Mounting Huge Pages File System...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting /dev/hugepages1G...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting /dev/hugepages2M...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting POSIX Message Queue File System...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting Kernel Debug File System...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting Kernel Trace File System...
Oct 09 09:31:59 compute-2 systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 09:31:59 compute-2 systemd[1]: Starting Create List of Static Device Nodes...
Oct 09 09:31:59 compute-2 systemd[1]: Load legacy module configuration was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module drm...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module efi_pstore...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module fuse...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct 09 09:31:59 compute-2 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped File System Check on Root Device.
Oct 09 09:31:59 compute-2 systemd[1]: Stopped Journal Service.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Journal Service...
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:31:59 compute-2 kernel: fuse: init (API version 7.37)
Oct 09 09:31:59 compute-2 systemd[1]: Starting Generate network units from Kernel command line...
Oct 09 09:31:59 compute-2 systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:31:59 compute-2 systemd[1]: Starting Remount Root and Kernel File Systems...
Oct 09 09:31:59 compute-2 systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Coldplug All udev Devices...
Oct 09 09:31:59 compute-2 systemd-journald[663]: Journal started
Oct 09 09:31:59 compute-2 systemd-journald[663]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.6M, 145.6M free.
Oct 09 09:31:59 compute-2 systemd[1]: Queued start job for default target Multi-User System.
Oct 09 09:31:59 compute-2 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 kernel: ACPI: bus type drm_connector registered
Oct 09 09:31:59 compute-2 systemd[1]: Started Journal Service.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted Huge Pages File System.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted /dev/hugepages1G.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted /dev/hugepages2M.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted POSIX Message Queue File System.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted Kernel Debug File System.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted Kernel Trace File System.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Create List of Static Device Nodes.
Oct 09 09:31:59 compute-2 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct 09 09:31:59 compute-2 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct 09 09:31:59 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:31:59 compute-2 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module drm.
Oct 09 09:31:59 compute-2 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct 09 09:31:59 compute-2 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module fuse.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Generate network units from Kernel command line.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct 09 09:31:59 compute-2 systemd[1]: Activating swap /swap...
Oct 09 09:31:59 compute-2 systemd[1]: Mounting FUSE Control File System...
Oct 09 09:31:59 compute-2 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 09:31:59 compute-2 systemd[1]: Rebuild Hardware Database was skipped because of an unmet condition check (ConditionNeedsUpdate=/etc).
Oct 09 09:31:59 compute-2 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct 09 09:31:59 compute-2 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct 09 09:31:59 compute-2 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load/Save OS Random Seed...
Oct 09 09:31:59 compute-2 systemd[1]: Create System Users was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct 09 09:31:59 compute-2 systemd[1]: Activated swap /swap.
Oct 09 09:31:59 compute-2 systemd[1]: Mounted FUSE Control File System.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Swaps.
Oct 09 09:31:59 compute-2 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 09 09:31:59 compute-2 systemd-journald[663]: Time spent on flushing to /var/log/journal/42833e1b511a402df82cb9cb2fc36491 is 8.834ms for 1155 entries.
Oct 09 09:31:59 compute-2 systemd-journald[663]: System Journal (/var/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 4.0G, 3.9G free.
Oct 09 09:31:59 compute-2 systemd-journald[663]: Received client request to flush runtime journal.
Oct 09 09:31:59 compute-2 kernel: Bridge firewalling registered
Oct 09 09:31:59 compute-2 systemd-modules-load[664]: Inserted module 'br_netfilter'
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load/Save OS Random Seed.
Oct 09 09:31:59 compute-2 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct 09 09:31:59 compute-2 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct 09 09:31:59 compute-2 systemd-modules-load[664]: Inserted module 'nf_conntrack'
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Apply Kernel Variables...
Oct 09 09:31:59 compute-2 systemd[1]: Finished Coldplug All udev Devices.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Apply Kernel Variables.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Preparation for Local File Systems.
Oct 09 09:31:59 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 09:31:59 compute-2 systemd[1]: Reached target Local File Systems.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Import network configuration from initramfs...
Oct 09 09:31:59 compute-2 systemd[1]: Rebuild Dynamic Linker Cache was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct 09 09:31:59 compute-2 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Automatic Boot Loader Update...
Oct 09 09:31:59 compute-2 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct 09 09:31:59 compute-2 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct 09 09:31:59 compute-2 bootctl[679]: Couldn't find EFI system partition, skipping.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Automatic Boot Loader Update.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Import network configuration from initramfs.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Create Volatile Files and Directories...
Oct 09 09:31:59 compute-2 systemd-udevd[681]: Using default interface naming scheme 'rhel-9.0'.
Oct 09 09:31:59 compute-2 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Load Kernel Module configfs...
Oct 09 09:31:59 compute-2 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Load Kernel Module configfs.
Oct 09 09:31:59 compute-2 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct 09 09:31:59 compute-2 systemd[1]: Finished Create Volatile Files and Directories.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Security Auditing Service...
Oct 09 09:31:59 compute-2 systemd[1]: Starting RPC Bind...
Oct 09 09:31:59 compute-2 systemd[1]: Rebuild Journal Catalog was skipped because of an unmet condition check (ConditionNeedsUpdate=/var).
Oct 09 09:31:59 compute-2 systemd[1]: Update is Completed was skipped because no trigger condition checks were met.
Oct 09 09:31:59 compute-2 systemd-udevd[700]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:59 compute-2 auditd[732]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct 09 09:31:59 compute-2 auditd[732]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct 09 09:31:59 compute-2 systemd[1]: Started RPC Bind.
Oct 09 09:31:59 compute-2 augenrules[738]: /sbin/augenrules: No change
Oct 09 09:31:59 compute-2 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct 09 09:31:59 compute-2 augenrules[756]: No rules
Oct 09 09:31:59 compute-2 augenrules[756]: enabled 1
Oct 09 09:31:59 compute-2 augenrules[756]: failure 1
Oct 09 09:31:59 compute-2 augenrules[756]: pid 732
Oct 09 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct 09 09:31:59 compute-2 augenrules[756]: lost 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog 4
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct 09 09:31:59 compute-2 augenrules[756]: enabled 1
Oct 09 09:31:59 compute-2 augenrules[756]: failure 1
Oct 09 09:31:59 compute-2 augenrules[756]: pid 732
Oct 09 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct 09 09:31:59 compute-2 augenrules[756]: lost 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog 8
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct 09 09:31:59 compute-2 augenrules[756]: enabled 1
Oct 09 09:31:59 compute-2 augenrules[756]: failure 1
Oct 09 09:31:59 compute-2 augenrules[756]: pid 732
Oct 09 09:31:59 compute-2 augenrules[756]: rate_limit 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_limit 8192
Oct 09 09:31:59 compute-2 augenrules[756]: lost 0
Oct 09 09:31:59 compute-2 augenrules[756]: backlog 12
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time 60000
Oct 09 09:31:59 compute-2 augenrules[756]: backlog_wait_time_actual 0
Oct 09 09:31:59 compute-2 kernel: lpc_ich 0000:00:1f.0: I/O space for GPIO uninitialized
Oct 09 09:31:59 compute-2 systemd[1]: Started Security Auditing Service.
Oct 09 09:31:59 compute-2 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct 09 09:31:59 compute-2 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct 09 09:31:59 compute-2 kernel: iTCO_vendor_support: vendor-support=0
Oct 09 09:31:59 compute-2 kernel: iTCO_wdt iTCO_wdt.1.auto: Found a ICH9 TCO device (Version=2, TCOBASE=0x0660)
Oct 09 09:31:59 compute-2 kernel: iTCO_wdt iTCO_wdt.1.auto: initialized. heartbeat=30 sec (nowayout=0)
Oct 09 09:31:59 compute-2 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Oct 09 09:31:59 compute-2 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct 09 09:31:59 compute-2 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct 09 09:31:59 compute-2 systemd-udevd[708]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:31:59 compute-2 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0
Oct 09 09:31:59 compute-2 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console
Oct 09 09:31:59 compute-2 kernel: Console: switching to colour dummy device 80x25
Oct 09 09:31:59 compute-2 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct 09 09:31:59 compute-2 kernel: [drm] features: -context_init
Oct 09 09:31:59 compute-2 kernel: [drm] number of scanouts: 1
Oct 09 09:31:59 compute-2 kernel: [drm] number of cap sets: 0
Oct 09 09:31:59 compute-2 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0
Oct 09 09:31:59 compute-2 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct 09 09:31:59 compute-2 kernel: Console: switching to colour frame buffer device 160x50
Oct 09 09:31:59 compute-2 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct 09 09:31:59 compute-2 kernel: kvm_amd: TSC scaling supported
Oct 09 09:31:59 compute-2 kernel: kvm_amd: Nested Virtualization enabled
Oct 09 09:31:59 compute-2 kernel: kvm_amd: Nested Paging enabled
Oct 09 09:31:59 compute-2 kernel: kvm_amd: LBR virtualization supported
Oct 09 09:31:59 compute-2 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Oct 09 09:31:59 compute-2 kernel: kvm_amd: Virtual GIF supported
Oct 09 09:32:00 compute-2 systemd[1]: Reached target System Initialization.
Oct 09 09:32:00 compute-2 systemd[1]: Started dnf makecache --timer.
Oct 09 09:32:00 compute-2 systemd[1]: Started Daily rotation of log files.
Oct 09 09:32:00 compute-2 systemd[1]: Started Run system activity accounting tool every 10 minutes.
Oct 09 09:32:00 compute-2 systemd[1]: Started Generate summary of yesterday's process accounting.
Oct 09 09:32:00 compute-2 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct 09 09:32:00 compute-2 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct 09 09:32:00 compute-2 systemd[1]: Reached target Timer Units.
Oct 09 09:32:00 compute-2 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct 09 09:32:00 compute-2 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct 09 09:32:00 compute-2 systemd[1]: Reached target Socket Units.
Oct 09 09:32:00 compute-2 systemd[1]: Starting D-Bus System Message Bus...
Oct 09 09:32:00 compute-2 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:32:00 compute-2 systemd[1]: Started D-Bus System Message Bus.
Oct 09 09:32:00 compute-2 systemd[1]: Reached target Basic System.
Oct 09 09:32:00 compute-2 dbus-broker-lau[791]: Ready
Oct 09 09:32:00 compute-2 systemd[1]: Starting NTP client/server...
Oct 09 09:32:00 compute-2 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct 09 09:32:00 compute-2 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct 09 09:32:00 compute-2 systemd[1]: Started irqbalance daemon.
Oct 09 09:32:00 compute-2 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct 09 09:32:00 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:32:00 compute-2 systemd[1]: Starting Netfilter Tables...
Oct 09 09:32:00 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:00 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:00 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:32:00 compute-2 systemd[1]: Reached target sshd-keygen.target.
Oct 09 09:32:00 compute-2 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct 09 09:32:00 compute-2 systemd[1]: Reached target User and Group Name Lookups.
Oct 09 09:32:00 compute-2 systemd[1]: Starting Resets System Activity Logs...
Oct 09 09:32:00 compute-2 systemd[1]: Starting User Login Management...
Oct 09 09:32:00 compute-2 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct 09 09:32:00 compute-2 systemd[1]: Finished Resets System Activity Logs.
Oct 09 09:32:00 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:32:00 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:32:00 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:32:00 compute-2 systemd-logind[800]: New seat seat0.
Oct 09 09:32:00 compute-2 chronyd[807]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct 09 09:32:00 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 09:32:00 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 09:32:00 compute-2 chronyd[807]: Frequency -10.736 +/- 0.303 ppm read from /var/lib/chrony/drift
Oct 09 09:32:00 compute-2 chronyd[807]: Loaded seccomp filter (level 2)
Oct 09 09:32:00 compute-2 systemd[1]: Started User Login Management.
Oct 09 09:32:00 compute-2 systemd[1]: Started NTP client/server.
Oct 09 09:32:00 compute-2 systemd[1]: Finished Netfilter Tables.
Oct 09 09:32:00 compute-2 cloud-init[826]: Cloud-init v. 24.4-7.el9 running 'init-local' at Thu, 09 Oct 2025 09:32:00 +0000. Up 5.27 seconds.
Oct 09 09:32:00 compute-2 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct 09 09:32:00 compute-2 systemd[1]: Reached target Preparation for Network.
Oct 09 09:32:00 compute-2 systemd[1]: Starting Open vSwitch Database Unit...
Oct 09 09:32:00 compute-2 chown[828]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct 09 09:32:01 compute-2 ovs-ctl[833]: Starting ovsdb-server [  OK  ]
Oct 09 09:32:01 compute-2 ovs-vsctl[882]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct 09 09:32:01 compute-2 ovs-vsctl[892]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"c24becb7-a313-4586-a73e-1530a4367da3\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct 09 09:32:01 compute-2 ovs-ctl[833]: Configuring Open vSwitch system IDs [  OK  ]
Oct 09 09:32:01 compute-2 ovs-vsctl[898]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct 09 09:32:01 compute-2 ovs-ctl[833]: Enabling remote OVSDB managers [  OK  ]
Oct 09 09:32:01 compute-2 systemd[1]: Started Open vSwitch Database Unit.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct 09 09:32:01 compute-2 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct 09 09:32:01 compute-2 kernel: openvswitch: Open vSwitch switching datapath
Oct 09 09:32:01 compute-2 ovs-ctl[942]: Inserting openvswitch module [  OK  ]
Oct 09 09:32:01 compute-2 kernel: ovs-system: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: Timeout policy base is empty
Oct 09 09:32:01 compute-2 kernel: vlan22: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: vlan21: entered promiscuous mode
Oct 09 09:32:01 compute-2 systemd-udevd[720]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:01 compute-2 kernel: vlan20: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: vlan23: entered promiscuous mode
Oct 09 09:32:01 compute-2 ovs-ctl[911]: Starting ovs-vswitchd [  OK  ]
Oct 09 09:32:01 compute-2 ovs-vsctl[982]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-2
Oct 09 09:32:01 compute-2 ovs-ctl[911]: Enabling remote OVSDB managers [  OK  ]
Oct 09 09:32:01 compute-2 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Open vSwitch...
Oct 09 09:32:01 compute-2 systemd[1]: Finished Open vSwitch.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Network Manager...
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4294] NetworkManager (version 1.54.1-1.el9) is starting... (boot:4e5c1d91-f962-4a1e-8648-9ceaaac75860)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4297] Read config: /etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /etc/NetworkManager/conf.d/99-cloud-init.conf, /var/lib/NetworkManager/NetworkManager-intern.conf
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4383] manager[0x55e6fe2ab040]: monitoring kernel firmware directory '/lib/firmware'.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Hostname Service...
Oct 09 09:32:01 compute-2 systemd[1]: Started Hostname Service.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4944] hostname: hostname: using hostnamed
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4945] hostname: static hostname changed from (none) to "compute-2"
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.4949] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5035] manager[0x55e6fe2ab040]: rfkill: Wi-Fi hardware radio set enabled
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5035] manager[0x55e6fe2ab040]: rfkill: WWAN hardware radio set enabled
Oct 09 09:32:01 compute-2 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5073] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5089] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5090] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5091] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5091] manager: Networking is enabled by state file
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5096] settings: Loaded settings plugin: keyfile (internal)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5116] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5179] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5195] dhcp: init: Using DHCP client 'internal'
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5197] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5206] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:32:01 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5218] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5224] device (lo): Activation: starting connection 'lo' (726b4f2c-1759-468e-9885-9a46134e929b)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5231] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5233] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5252] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/3)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5254] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5265] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/4)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5267] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5278] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/5)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5280] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5291] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/6)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5293] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5309] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/7)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5312] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5326] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5328] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5334] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5336] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5341] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5343] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5349] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/11)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5352] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5364] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/12)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5367] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5372] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5374] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5379] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/14)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5381] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5386] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5388] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 systemd[1]: Started Network Manager.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5394] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5399] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5409] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5410] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5412] device (eth0): carrier: link connected
Oct 09 09:32:01 compute-2 systemd[1]: Reached target Network.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5413] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5421] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5422] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5423] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5424] device (eth1): carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5429] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5433] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5438] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5441] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5444] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5447] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5450] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5454] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5455] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5456] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5457] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5458] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5460] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5461] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5464] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5466] policy: auto-activating connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5467] policy: auto-activating connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5467] policy: auto-activating connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5468] policy: auto-activating connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5469] policy: auto-activating connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5470] policy: auto-activating connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5470] policy: auto-activating connection 'br-ex-br' (e74d22ec-f198-4e0f-be1a-1f80d32c41d9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5472] policy: auto-activating connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5472] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5476] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5477] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5479] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5481] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5482] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5484] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5485] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct 09 09:32:01 compute-2 kernel: vlan22: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5489] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (e74d22ec-f198-4e0f-be1a-1f80d32c41d9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5490] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct 09 09:32:01 compute-2 systemd[1]: Starting Network Manager Wait Online...
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5491] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 09:32:01 compute-2 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5534] device (lo): Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5541] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5543] manager: NetworkManager state is now CONNECTING
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5543] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5556] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5558] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5560] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 09:32:01 compute-2 kernel: vlan23: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5575] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5576] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5578] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5579] device (br-ex)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5591] device (br-ex)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5594] device (eth1)[Open vSwitch Port]: state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5602] device (eth1)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5602] device (vlan20)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5612] device (vlan20)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5613] device (vlan21)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5619] device (vlan21)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5619] device (vlan22)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5622] device (vlan22)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5623] device (vlan23)[Open vSwitch Port]: state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5626] device (vlan23)[Open vSwitch Port]: disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5627] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5628] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5629] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5630] device (eth1): state change: prepare -> deactivating (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5634] device (eth1): disconnecting for new activation request.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5635] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5643] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5647] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct 09 09:32:01 compute-2 kernel: vlan20: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5669] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5687] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5698] device (br-ex)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 systemd[1]: Started GSSAPI Proxy Daemon.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5716] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (8ffce742-fb38-47d1-9133-f81b3e7c1d96)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5718] device (eth1)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct 09 09:32:01 compute-2 systemd[1]: Reached target NFS client services.
Oct 09 09:32:01 compute-2 systemd[1]: Reached target Preparation for Remote File Systems.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5736] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (ea9c5477-1c68-4281-8c51-5d80fd4aa6e4)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5747] device (vlan20)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 systemd[1]: Reached target Remote File Systems.
Oct 09 09:32:01 compute-2 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5762] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (13e69b35-1d02-48f3-8023-3685cdfc9a88)
Oct 09 09:32:01 compute-2 kernel: vlan21: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5807] dhcp4 (eth0): state changed new lease, address=192.168.26.193
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5820] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5837] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5847] policy: auto-activating connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5850] policy: auto-activating connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5852] policy: auto-activating connection 'vlan23-if' (0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5856] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5860] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5861] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5861] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5862] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5865] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 kernel: ovs-system: left promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5871] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5871] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5873] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5877] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5878] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5878] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5887] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5894] device (vlan21)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5900] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (084745e1-4043-483a-ab5e-f09ef7745634)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5902] device (vlan22)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5907] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (945df9e4-941f-40b7-8466-cc015b98ce41)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5909] device (vlan23)[Open vSwitch Port]: state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5913] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (406aaad8-f358-4cbe-959e-1924644e4828)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5914] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5932] device (eth1): Activation: starting connection 'ci-private-network' (14fca061-f236-5fd4-a05f-8577fd3a8a98)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5935] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5940] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.5947] policy: auto-activating connection 'vlan22-if' (508ad833-78ea-45c2-a626-2e5f91da07cd)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6001] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6006] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6012] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (0ace4b83-92a3-4dfc-8fb0-2b73ed4fc795)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6013] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6023] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6029] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6038] policy: auto-activating connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6044] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6048] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6051] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6052] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6055] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6061] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6063] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6064] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6066] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6070] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6072] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6073] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6076] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6081] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6085] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6090] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6101] policy: auto-activating connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6104] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6118] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6124] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (508ad833-78ea-45c2-a626-2e5f91da07cd)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6124] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6126] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6129] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6134] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6135] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6142] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 kernel: ovs-system: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: No such timeout policy "ovs_test_tp"
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6146] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6148] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (6a31aae5-a24d-49f8-9056-2c0284cb05d9)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6149] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6151] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6156] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6158] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6162] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6164] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6166] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (e10720fe-4a3a-42fc-9eab-40565799ce5b)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6167] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6168] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6169] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6173] policy: auto-activating connection 'br-ex-if' (e5a1a2e1-7841-4bc2-871e-a31627de4c3f)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6175] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6177] device (eth0): Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6182] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6195] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6196] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6196] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6198] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6200] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6202] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6203] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6209] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6211] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6213] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6217] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e5a1a2e1-7841-4bc2-871e-a31627de4c3f)
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6218] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6220] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6222] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6224] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6226] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6229] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6231] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6257] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6261] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6266] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6267] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 kernel: vlan23: entered promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6270] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6272] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6275] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6277] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6282] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6303] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6319] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6321] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6323] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6325] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6328] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6332] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6334] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 kernel: vlan22: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct 09 09:32:01 compute-2 kernel: vlan20: entered promiscuous mode
Oct 09 09:32:01 compute-2 systemd-udevd[696]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6399] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6422] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6431] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6442] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6490] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6493] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6498] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6504] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 kernel: br-ex: entered promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6512] device (eth1): Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6520] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6536] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6545] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6553] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6560] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6589] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 kernel: vlan21: entered promiscuous mode
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6613] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6628] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6633] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6641] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6648] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6664] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6665] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6670] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6697] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6706] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6723] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6727] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6736] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct 09 09:32:01 compute-2 NetworkManager[984]: <info>  [1760002321.6747] manager: startup complete
Oct 09 09:32:01 compute-2 systemd[1]: Finished Network Manager Wait Online.
Oct 09 09:32:01 compute-2 systemd[1]: Starting Cloud-init: Network Stage...
Oct 09 09:32:01 compute-2 systemd[1]: Starting Authorization Manager...
Oct 09 09:32:01 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 09:32:01 compute-2 polkitd[1121]: Started polkitd version 0.117
Oct 09 09:32:01 compute-2 polkitd[1121]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:32:01 compute-2 polkitd[1121]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:32:01 compute-2 polkitd[1121]: Finished loading, compiling and executing 3 rules
Oct 09 09:32:01 compute-2 systemd[1]: Started Authorization Manager.
Oct 09 09:32:01 compute-2 polkitd[1121]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Oct 09 09:32:01 compute-2 cloud-init[1211]: Cloud-init v. 24.4-7.el9 running 'init' at Thu, 09 Oct 2025 09:32:01 +0000. Up 6.41 seconds.
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   Device   |   Up  |     Address     |      Mask     | Scope  |     Hw-Address    |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   br-ex    |  True | 192.168.122.102 | 255.255.255.0 | global | fa:16:3e:27:92:90 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |    eth0    |  True |  192.168.26.193 | 255.255.255.0 | global | fa:16:3e:49:30:79 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |    eth1    |  True |        .        |       .       |   .    | fa:16:3e:27:92:90 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |     lo     |  True |    127.0.0.1    |   255.0.0.0   |  host  |         .         |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |     lo     |  True |     ::1/128     |       .       |  host  |         .         |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: | ovs-system | False |        .        |       .       |   .    | 12:64:9c:17:a0:eb |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan20   |  True |   172.17.0.102  | 255.255.255.0 | global | 4a:8d:e2:12:1b:28 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan21   |  True |   172.18.0.102  | 255.255.255.0 | global | c6:a3:7e:f5:98:20 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan22   |  True |   172.19.0.102  | 255.255.255.0 | global | 12:06:a2:8f:c3:3a |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   vlan23   |  True |   172.20.0.102  | 255.255.255.0 | global | 92:d6:e4:56:b5:24 |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +------------+-------+-----------------+---------------+--------+-------------------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: ++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: | Route |   Destination   |   Gateway    |     Genmask     | Interface | Flags |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   0   |     0.0.0.0     | 192.168.26.1 |     0.0.0.0     |    eth0   |   UG  |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   1   | 169.254.169.254 | 192.168.26.2 | 255.255.255.255 |    eth0   |  UGH  |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   2   |    172.17.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan20  |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   3   |    172.18.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan21  |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   4   |    172.19.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan22  |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   5   |    172.20.0.0   |   0.0.0.0    |  255.255.255.0  |   vlan23  |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   6   |   192.168.26.0  |   0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   7   |  192.168.122.0  |   0.0.0.0    |  255.255.255.0  |   br-ex   |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-----------------+--------------+-----------------+-----------+-------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: |   2   |  multicast  |    ::   |    eth1   |   U   |
Oct 09 09:32:01 compute-2 cloud-init[1211]: ci-info: +-------+-------------+---------+-----------+-------+
Oct 09 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Network Stage.
Oct 09 09:32:02 compute-2 systemd[1]: Reached target Cloud-config availability.
Oct 09 09:32:02 compute-2 systemd[1]: Reached target Network is Online.
Oct 09 09:32:02 compute-2 systemd[1]: Starting Cloud-init: Config Stage...
Oct 09 09:32:02 compute-2 systemd[1]: Starting EDPM Container Shutdown...
Oct 09 09:32:02 compute-2 systemd[1]: Starting Notify NFS peers of a restart...
Oct 09 09:32:02 compute-2 systemd[1]: Starting System Logging Service...
Oct 09 09:32:02 compute-2 sm-notify[1244]: Version 2.5.4 starting
Oct 09 09:32:02 compute-2 systemd[1]: Starting OpenSSH server daemon...
Oct 09 09:32:02 compute-2 systemd[1]: Starting Permit User Sessions...
Oct 09 09:32:02 compute-2 systemd[1]: Finished EDPM Container Shutdown.
Oct 09 09:32:02 compute-2 systemd[1]: Started Notify NFS peers of a restart.
Oct 09 09:32:02 compute-2 systemd[1]: Finished Permit User Sessions.
Oct 09 09:32:02 compute-2 sshd[1246]: Server listening on 0.0.0.0 port 22.
Oct 09 09:32:02 compute-2 sshd[1246]: Server listening on :: port 22.
Oct 09 09:32:02 compute-2 systemd[1]: Started Command Scheduler.
Oct 09 09:32:02 compute-2 systemd[1]: Started Getty on tty1.
Oct 09 09:32:02 compute-2 crond[1248]: (CRON) STARTUP (1.5.7)
Oct 09 09:32:02 compute-2 crond[1248]: (CRON) INFO (Syslog will be used instead of sendmail.)
Oct 09 09:32:02 compute-2 crond[1248]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 82% if used.)
Oct 09 09:32:02 compute-2 crond[1248]: (CRON) INFO (running with inotify support)
Oct 09 09:32:02 compute-2 systemd[1]: Started Serial Getty on ttyS0.
Oct 09 09:32:02 compute-2 systemd[1]: Reached target Login Prompts.
Oct 09 09:32:02 compute-2 systemd[1]: Started OpenSSH server daemon.
Oct 09 09:32:02 compute-2 systemd[1]: Started System Logging Service.
Oct 09 09:32:02 compute-2 rsyslogd[1245]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1245" x-info="https://www.rsyslog.com"] start
Oct 09 09:32:02 compute-2 systemd[1]: Reached target Multi-User System.
Oct 09 09:32:02 compute-2 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct 09 09:32:02 compute-2 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct 09 09:32:02 compute-2 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct 09 09:32:02 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:32:02 compute-2 cloud-init[1257]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Thu, 09 Oct 2025 09:32:02 +0000. Up 6.84 seconds.
Oct 09 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Config Stage.
Oct 09 09:32:02 compute-2 systemd[1]: Starting Cloud-init: Final Stage...
Oct 09 09:32:02 compute-2 cloud-init[1261]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Thu, 09 Oct 2025 09:32:02 +0000. Up 7.14 seconds.
Oct 09 09:32:02 compute-2 cloud-init[1261]: Cloud-init v. 24.4-7.el9 finished at Thu, 09 Oct 2025 09:32:02 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 7.18 seconds
Oct 09 09:32:02 compute-2 systemd[1]: Finished Cloud-init: Final Stage.
Oct 09 09:32:02 compute-2 systemd[1]: Reached target Cloud-init target.
Oct 09 09:32:02 compute-2 systemd[1]: Startup finished in 1.267s (kernel) + 1.958s (initrd) + 4.000s (userspace) = 7.227s.
Oct 09 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 45 affinity: Operation not permitted
Oct 09 09:32:10 compute-2 irqbalance[796]: IRQ 45 affinity is now unmanaged
Oct 09 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 43 affinity: Operation not permitted
Oct 09 09:32:10 compute-2 irqbalance[796]: IRQ 43 affinity is now unmanaged
Oct 09 09:32:10 compute-2 irqbalance[796]: Cannot change IRQ 42 affinity: Operation not permitted
Oct 09 09:32:10 compute-2 irqbalance[796]: IRQ 42 affinity is now unmanaged
Oct 09 09:32:11 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 09:32:31 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 09:32:50 compute-2 sshd-session[1266]: Accepted publickey for zuul from 192.168.122.30 port 41004 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:32:50 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 09:32:50 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 09:32:50 compute-2 systemd-logind[800]: New session 1 of user zuul.
Oct 09 09:32:51 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 09:32:51 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct 09 09:32:51 compute-2 systemd[1270]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:51 compute-2 systemd[1270]: Queued start job for default target Main User Target.
Oct 09 09:32:51 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:32:51 compute-2 systemd[1270]: Created slice User Application Slice.
Oct 09 09:32:51 compute-2 systemd[1270]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:32:51 compute-2 systemd[1270]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:32:51 compute-2 systemd[1270]: Reached target Paths.
Oct 09 09:32:51 compute-2 systemd[1270]: Reached target Timers.
Oct 09 09:32:51 compute-2 systemd[1270]: Starting D-Bus User Message Bus Socket...
Oct 09 09:32:51 compute-2 systemd[1270]: Starting Create User's Volatile Files and Directories...
Oct 09 09:32:51 compute-2 systemd[1270]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:32:51 compute-2 systemd[1270]: Finished Create User's Volatile Files and Directories.
Oct 09 09:32:51 compute-2 systemd[1270]: Reached target Sockets.
Oct 09 09:32:51 compute-2 systemd[1270]: Reached target Basic System.
Oct 09 09:32:51 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct 09 09:32:51 compute-2 systemd[1270]: Reached target Main User Target.
Oct 09 09:32:51 compute-2 systemd[1270]: Startup finished in 87ms.
Oct 09 09:32:51 compute-2 systemd[1]: Started Session 1 of User zuul.
Oct 09 09:32:51 compute-2 sshd-session[1266]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:51 compute-2 sudo[1312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvbelupiyqmnqikbhmgaqmabjuwxbrat ; cat /proc/sys/kernel/random/boot_id'
Oct 09 09:32:51 compute-2 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:51 compute-2 sudo[1312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:51 compute-2 sudo[1341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljbizxuizdymabakovfrkmriwtqokvpd ; whoami'
Oct 09 09:32:51 compute-2 sudo[1341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:51 compute-2 sudo[1341]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:51 compute-2 sudo[1493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpyxcdfvjbeobvvxjzgqfgkkryzhfnml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002371.6813812-337-83018652614077/AnsiballZ_file.py'
Oct 09 09:32:51 compute-2 sudo[1493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:51 compute-2 python3.9[1495]: ansible-ansible.builtin.file Invoked with path=/var/lib/openstack/reboot_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:32:51 compute-2 sudo[1493]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:51 compute-2 sshd-session[1285]: Connection closed by 192.168.122.30 port 41004
Oct 09 09:32:51 compute-2 sshd-session[1266]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:32:51 compute-2 systemd[1]: session-1.scope: Deactivated successfully.
Oct 09 09:32:51 compute-2 systemd-logind[800]: Session 1 logged out. Waiting for processes to exit.
Oct 09 09:32:51 compute-2 systemd-logind[800]: Removed session 1.
Oct 09 09:32:58 compute-2 sshd-session[1520]: Accepted publickey for zuul from 192.168.26.46 port 34468 ssh2: RSA SHA256:v7VHW1cDs9OvF+ufrkmS713ZRHIh0wMGaPvplRsZw/E
Oct 09 09:32:58 compute-2 systemd-logind[800]: New session 3 of user zuul.
Oct 09 09:32:58 compute-2 systemd[1]: Started Session 3 of User zuul.
Oct 09 09:32:58 compute-2 sshd-session[1520]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:32:58 compute-2 sudo[1596]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oddgqhhttdhyzliqipmxnxiiehxvascy ; /usr/bin/python3'
Oct 09 09:32:58 compute-2 sudo[1596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:58 compute-2 useradd[1600]: new group: name=ceph-admin, GID=42478
Oct 09 09:32:58 compute-2 useradd[1600]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Oct 09 09:32:58 compute-2 sudo[1596]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:58 compute-2 sudo[1682]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryqhneckpjiirbedftyisnqkcgfmonkq ; /usr/bin/python3'
Oct 09 09:32:58 compute-2 sudo[1682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:59 compute-2 sudo[1682]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:59 compute-2 sudo[1755]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijymmcszveecvubskbmjedampwzrzmvk ; /usr/bin/python3'
Oct 09 09:32:59 compute-2 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:59 compute-2 sudo[1755]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:59 compute-2 sudo[1805]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yepubjgngtdccfcidxkwtbqcwzfnnuws ; /usr/bin/python3'
Oct 09 09:32:59 compute-2 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:32:59 compute-2 sudo[1805]: pam_unix(sudo:session): session closed for user root
Oct 09 09:32:59 compute-2 sudo[1831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdcuefhivrvapkvutkdfmsgzrwojjebs ; /usr/bin/python3'
Oct 09 09:32:59 compute-2 sudo[1831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-2 sudo[1831]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:00 compute-2 sudo[1857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjezheyeftpxvbrpduziezocwqyvtmbi ; /usr/bin/python3'
Oct 09 09:33:00 compute-2 sudo[1857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-2 sudo[1857]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:00 compute-2 sudo[1883]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umseskvkngtvuwxgwbzyghtngcynsazg ; /usr/bin/python3'
Oct 09 09:33:00 compute-2 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:00 compute-2 sudo[1883]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-2 sudo[1961]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iytntriuurtscavvhbdkodgtdimhlrdu ; /usr/bin/python3'
Oct 09 09:33:01 compute-2 sudo[1961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-2 sudo[1961]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-2 sudo[2034]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scdxsjcuftlfucgctsewjpfeupnorcgo ; /usr/bin/python3'
Oct 09 09:33:01 compute-2 sudo[2034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-2 sudo[2034]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:01 compute-2 sudo[2136]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjzbwcvfwwikrzsxufbsatojnasuadyq ; /usr/bin/python3'
Oct 09 09:33:01 compute-2 sudo[2136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:01 compute-2 sudo[2136]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:02 compute-2 sudo[2209]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsvzmtkkwbgayaxkunvyttphcypphzm ; /usr/bin/python3'
Oct 09 09:33:02 compute-2 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:02 compute-2 sudo[2209]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:02 compute-2 sudo[2259]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdypubpfypqcajobncqlajicsejiqxn ; /usr/bin/python3'
Oct 09 09:33:02 compute-2 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:02 compute-2 python3[2261]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:33:03 compute-2 sudo[2259]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:04 compute-2 sudo[2350]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofzzpqitfruwjqdaimxcmfvmseczlpax ; /usr/bin/python3'
Oct 09 09:33:04 compute-2 sudo[2350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:04 compute-2 python3[2352]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct 09 09:33:05 compute-2 sudo[2350]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:05 compute-2 sudo[2377]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcwbcxrsxsonbsyfneapltqtiozzjtvh ; /usr/bin/python3'
Oct 09 09:33:05 compute-2 sudo[2377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:05 compute-2 python3[2379]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:33:05 compute-2 sudo[2377]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:05 compute-2 sudo[2403]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onjwdebinodulrhjbiayqhfkfunmguta ; /usr/bin/python3'
Oct 09 09:33:05 compute-2 sudo[2403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:05 compute-2 python3[2405]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                         losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                         lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:33:05 compute-2 kernel: loop: module loaded
Oct 09 09:33:05 compute-2 kernel: loop3: detected capacity change from 0 to 41943040
Oct 09 09:33:05 compute-2 sudo[2403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:06 compute-2 sudo[2438]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggslcqthncqbutfhmcffzjsmacfdhjsb ; /usr/bin/python3'
Oct 09 09:33:06 compute-2 sudo[2438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:06 compute-2 python3[2440]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                         vgcreate ceph_vg0 /dev/loop3
                                         lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                         lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:33:06 compute-2 lvm[2443]: PV /dev/loop3 not used.
Oct 09 09:33:06 compute-2 lvm[2445]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:06 compute-2 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct 09 09:33:06 compute-2 lvm[2455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:06 compute-2 lvm[2455]: VG ceph_vg0 finished
Oct 09 09:33:06 compute-2 lvm[2452]:   1 logical volume(s) in volume group "ceph_vg0" now active
Oct 09 09:33:06 compute-2 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct 09 09:33:06 compute-2 sudo[2438]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:06 compute-2 sudo[2531]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raubzoqtowhyifrvlnypjovfqrowflov ; /usr/bin/python3'
Oct 09 09:33:06 compute-2 sudo[2531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:06 compute-2 python3[2533]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct 09 09:33:06 compute-2 sudo[2531]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:06 compute-2 sudo[2604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jyinbhnedkcqrglxivswqzoyznhakhcw ; /usr/bin/python3'
Oct 09 09:33:06 compute-2 sudo[2604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:06 compute-2 python3[2606]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002386.7986755-33835-120594403229080/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:33:06 compute-2 sudo[2604]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:07 compute-2 sudo[2654]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exnwqvmasmsqqzupiywaymetwnomwkkb ; /usr/bin/python3'
Oct 09 09:33:07 compute-2 sudo[2654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:33:07 compute-2 python3[2656]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:33:07 compute-2 systemd[1]: Reloading.
Oct 09 09:33:07 compute-2 systemd-rc-local-generator[2678]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:33:07 compute-2 systemd-sysv-generator[2682]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:33:07 compute-2 systemd[1]: Starting Ceph OSD losetup...
Oct 09 09:33:07 compute-2 bash[2695]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Oct 09 09:33:07 compute-2 systemd[1]: Finished Ceph OSD losetup.
Oct 09 09:33:07 compute-2 lvm[2696]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:33:07 compute-2 lvm[2696]: VG ceph_vg0 finished
Oct 09 09:33:07 compute-2 sudo[2654]: pam_unix(sudo:session): session closed for user root
Oct 09 09:33:09 compute-2 python3[2720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:34:18 compute-2 sshd-session[2764]: Accepted publickey for ceph-admin from 192.168.122.100 port 42542 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:18 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 09:34:18 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 09:34:18 compute-2 systemd-logind[800]: New session 4 of user ceph-admin.
Oct 09 09:34:18 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 09:34:18 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct 09 09:34:18 compute-2 systemd[2768]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-2 systemd[2768]: Queued start job for default target Main User Target.
Oct 09 09:34:18 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:34:18 compute-2 systemd[2768]: Created slice User Application Slice.
Oct 09 09:34:18 compute-2 systemd[2768]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:34:18 compute-2 systemd[2768]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:34:18 compute-2 systemd[2768]: Reached target Paths.
Oct 09 09:34:18 compute-2 systemd[2768]: Reached target Timers.
Oct 09 09:34:18 compute-2 systemd[2768]: Starting D-Bus User Message Bus Socket...
Oct 09 09:34:18 compute-2 systemd[2768]: Starting Create User's Volatile Files and Directories...
Oct 09 09:34:18 compute-2 systemd[2768]: Finished Create User's Volatile Files and Directories.
Oct 09 09:34:18 compute-2 systemd[2768]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:34:18 compute-2 systemd[2768]: Reached target Sockets.
Oct 09 09:34:18 compute-2 systemd[2768]: Reached target Basic System.
Oct 09 09:34:18 compute-2 systemd[2768]: Reached target Main User Target.
Oct 09 09:34:18 compute-2 systemd[2768]: Startup finished in 84ms.
Oct 09 09:34:18 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct 09 09:34:18 compute-2 systemd[1]: Started Session 4 of User ceph-admin.
Oct 09 09:34:18 compute-2 sshd-session[2764]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-2 sshd-session[2781]: Accepted publickey for ceph-admin from 192.168.122.100 port 42554 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:18 compute-2 systemd-logind[800]: New session 6 of user ceph-admin.
Oct 09 09:34:18 compute-2 systemd[1]: Started Session 6 of User ceph-admin.
Oct 09 09:34:18 compute-2 sshd-session[2781]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-2 sudo[2788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:18 compute-2 sudo[2788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:18 compute-2 sudo[2788]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:18 compute-2 sshd-session[2813]: Accepted publickey for ceph-admin from 192.168.122.100 port 42558 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:18 compute-2 systemd-logind[800]: New session 7 of user ceph-admin.
Oct 09 09:34:18 compute-2 systemd[1]: Started Session 7 of User ceph-admin.
Oct 09 09:34:18 compute-2 sshd-session[2813]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:18 compute-2 sudo[2817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-2
Oct 09 09:34:18 compute-2 sudo[2817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:18 compute-2 sudo[2817]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-2 sshd-session[2842]: Accepted publickey for ceph-admin from 192.168.122.100 port 42566 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:19 compute-2 systemd-logind[800]: New session 8 of user ceph-admin.
Oct 09 09:34:19 compute-2 systemd[1]: Started Session 8 of User ceph-admin.
Oct 09 09:34:19 compute-2 sshd-session[2842]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:19 compute-2 sudo[2846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 09:34:19 compute-2 sudo[2846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-2 sudo[2846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-2 sshd-session[2871]: Accepted publickey for ceph-admin from 192.168.122.100 port 42570 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:19 compute-2 systemd-logind[800]: New session 9 of user ceph-admin.
Oct 09 09:34:19 compute-2 systemd[1]: Started Session 9 of User ceph-admin.
Oct 09 09:34:19 compute-2 sshd-session[2871]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:19 compute-2 sudo[2875]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:19 compute-2 sudo[2875]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-2 sudo[2875]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-2 sshd-session[2900]: Accepted publickey for ceph-admin from 192.168.122.100 port 42580 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:19 compute-2 systemd-logind[800]: New session 10 of user ceph-admin.
Oct 09 09:34:19 compute-2 systemd[1]: Started Session 10 of User ceph-admin.
Oct 09 09:34:19 compute-2 sshd-session[2900]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:19 compute-2 sudo[2904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:19 compute-2 sudo[2904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-2 sudo[2904]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:19 compute-2 sshd-session[2929]: Accepted publickey for ceph-admin from 192.168.122.100 port 42584 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:19 compute-2 systemd-logind[800]: New session 11 of user ceph-admin.
Oct 09 09:34:19 compute-2 systemd[1]: Started Session 11 of User ceph-admin.
Oct 09 09:34:19 compute-2 sshd-session[2929]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:19 compute-2 sudo[2933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 09:34:19 compute-2 sudo[2933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:19 compute-2 sudo[2933]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-2 sshd-session[2958]: Accepted publickey for ceph-admin from 192.168.122.100 port 42586 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:20 compute-2 systemd-logind[800]: New session 12 of user ceph-admin.
Oct 09 09:34:20 compute-2 systemd[1]: Started Session 12 of User ceph-admin.
Oct 09 09:34:20 compute-2 sshd-session[2958]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:20 compute-2 sudo[2962]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:34:20 compute-2 sudo[2962]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-2 sudo[2962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-2 sshd-session[2987]: Accepted publickey for ceph-admin from 192.168.122.100 port 42594 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:20 compute-2 systemd-logind[800]: New session 13 of user ceph-admin.
Oct 09 09:34:20 compute-2 systemd[1]: Started Session 13 of User ceph-admin.
Oct 09 09:34:20 compute-2 sshd-session[2987]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:20 compute-2 sudo[2991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new
Oct 09 09:34:20 compute-2 sudo[2991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:20 compute-2 sudo[2991]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:20 compute-2 irqbalance[796]: Cannot change IRQ 44 affinity: Operation not permitted
Oct 09 09:34:20 compute-2 irqbalance[796]: IRQ 44 affinity is now unmanaged
Oct 09 09:34:20 compute-2 sshd-session[3016]: Accepted publickey for ceph-admin from 192.168.122.100 port 42608 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:20 compute-2 systemd-logind[800]: New session 14 of user ceph-admin.
Oct 09 09:34:20 compute-2 systemd[1]: Started Session 14 of User ceph-admin.
Oct 09 09:34:20 compute-2 sshd-session[3016]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:21 compute-2 chronyd[807]: Selected source 69.176.84.79 (pool.ntp.org)
Oct 09 09:34:21 compute-2 sshd-session[3043]: Accepted publickey for ceph-admin from 192.168.122.100 port 42622 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:21 compute-2 systemd-logind[800]: New session 15 of user ceph-admin.
Oct 09 09:34:21 compute-2 systemd[1]: Started Session 15 of User ceph-admin.
Oct 09 09:34:21 compute-2 sshd-session[3043]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:21 compute-2 sudo[3047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36
Oct 09 09:34:21 compute-2 sudo[3047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:21 compute-2 sudo[3047]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:21 compute-2 sshd-session[3072]: Accepted publickey for ceph-admin from 192.168.122.100 port 42624 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:34:21 compute-2 systemd-logind[800]: New session 16 of user ceph-admin.
Oct 09 09:34:21 compute-2 systemd[1]: Started Session 16 of User ceph-admin.
Oct 09 09:34:21 compute-2 sshd-session[3072]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:34:21 compute-2 sudo[3076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host --expect-hostname compute-2
Oct 09 09:34:21 compute-2 sudo[3076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:22 compute-2 kernel: evm: overlay not supported
Oct 09 09:34:22 compute-2 systemd[1]: var-lib-containers-storage-overlay-opaque\x2dbug\x2dcheck4032882647-merged.mount: Deactivated successfully.
Oct 09 09:34:22 compute-2 podman[3101]: 2025-10-09 09:34:22.172578442 +0000 UTC m=+0.064718431 system refresh
Oct 09 09:34:22 compute-2 sudo[3076]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:23 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:50 compute-2 sudo[3123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:34:50 compute-2 sudo[3123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 sudo[3123]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3148]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:50 compute-2 sudo[3148]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 sudo[3148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:34:50 compute-2 sudo[3173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:50 compute-2 sudo[3173]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:50 compute-2 sudo[3216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 sudo[3216]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:34:50 compute-2 sudo[3241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:50 compute-2 sudo[3241]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:50 compute-2 sudo[3296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:50 compute-2 sudo[3296]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:50 compute-2 sudo[3321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:34:50 compute-2 sudo[3321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:51 compute-2 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 3356 (sysctl)
Oct 09 09:34:51 compute-2 systemd[1270]: Starting Mark boot as successful...
Oct 09 09:34:51 compute-2 systemd[1270]: Finished Mark boot as successful.
Oct 09 09:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:51 compute-2 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct 09 09:34:51 compute-2 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct 09 09:34:51 compute-2 sudo[3321]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:51 compute-2 sudo[3379]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:51 compute-2 sudo[3379]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:51 compute-2 sudo[3379]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:51 compute-2 sudo[3404]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:34:51 compute-2 sudo[3404]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:51 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:34:51 compute-2 sudo[3404]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:51 compute-2 sudo[3444]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:34:51 compute-2 sudo[3444]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:51 compute-2 sudo[3444]: pam_unix(sudo:session): session closed for user root
Oct 09 09:34:51 compute-2 sudo[3469]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 09:34:51 compute-2 sudo[3469]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:34:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat4151351202-merged.mount: Deactivated successfully.
Oct 09 09:34:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-compat4151351202-lower\x2dmapped.mount: Deactivated successfully.
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.596118296 +0000 UTC m=+17.404629021 container create 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.584824508 +0000 UTC m=+17.393335233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:09 compute-2 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct 09 09:35:09 compute-2 systemd[1]: Started libpod-conmon-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope.
Oct 09 09:35:09 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.688763849 +0000 UTC m=+17.497274584 container init 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.695434399 +0000 UTC m=+17.503945124 container start 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.696722647 +0000 UTC m=+17.505233373 container attach 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:09 compute-2 peaceful_albattani[3574]: 167 167
Oct 09 09:35:09 compute-2 systemd[1]: libpod-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope: Deactivated successfully.
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.70368643 +0000 UTC m=+17.512197155 container died 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:35:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-828d4ed0ac1cef143319c9e51d23d12d616c08658b69ae332cab3d6c03a625fe-merged.mount: Deactivated successfully.
Oct 09 09:35:09 compute-2 podman[3524]: 2025-10-09 09:35:09.724658569 +0000 UTC m=+17.533169285 container remove 30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_albattani, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:09 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:09 compute-2 systemd[1]: libpod-conmon-30718b6e334edb181168710cbb204ace1b03d67ec48e28bba5c94344e9fde4ab.scope: Deactivated successfully.
Oct 09 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.848939029 +0000 UTC m=+0.033119618 container create 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:09 compute-2 systemd[1]: Started libpod-conmon-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope.
Oct 09 09:35:09 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea6bfb2fe90351834e926e1696d78d24e9533a865a013c8a5ebdb45f047e797/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea6bfb2fe90351834e926e1696d78d24e9533a865a013c8a5ebdb45f047e797/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.911511851 +0000 UTC m=+0.095692459 container init 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.925153307 +0000 UTC m=+0.109333905 container start 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.926296502 +0000 UTC m=+0.110477100 container attach 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 09 09:35:09 compute-2 podman[3596]: 2025-10-09 09:35:09.836175471 +0000 UTC m=+0.020356089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:10 compute-2 brave_sutherland[3610]: [
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:     {
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "available": false,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "being_replaced": false,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "ceph_device_lvm": false,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "lsm_data": {},
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "lvs": [],
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "path": "/dev/sr0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "rejected_reasons": [
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "Has a FileSystem",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "Insufficient space (<5GB)"
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         ],
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         "sys_api": {
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "actuators": null,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "device_nodes": [
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:                 "sr0"
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             ],
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "devname": "sr0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "human_readable_size": "474.00 KB",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "id_bus": "ata",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "model": "QEMU DVD-ROM",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "nr_requests": "64",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "parent": "/dev/sr0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "partitions": {},
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "path": "/dev/sr0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "removable": "1",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "rev": "2.5+",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "ro": "0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "rotational": "0",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "sas_address": "",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "sas_device_handle": "",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "scheduler_mode": "mq-deadline",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "sectors": 0,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "sectorsize": "2048",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "size": 485376.0,
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "support_discard": "2048",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "type": "disk",
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:             "vendor": "QEMU"
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:         }
Oct 09 09:35:10 compute-2 brave_sutherland[3610]:     }
Oct 09 09:35:10 compute-2 brave_sutherland[3610]: ]
Oct 09 09:35:10 compute-2 systemd[1]: libpod-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope: Deactivated successfully.
Oct 09 09:35:10 compute-2 podman[4674]: 2025-10-09 09:35:10.580412535 +0000 UTC m=+0.021378447 container died 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:10 compute-2 podman[4674]: 2025-10-09 09:35:10.602488236 +0000 UTC m=+0.043454127 container remove 4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:10 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:10 compute-2 systemd[1]: libpod-conmon-4bab9050cd48af560fad5cf97717c2a86758da3661644028b86f4758da0ab7b8.scope: Deactivated successfully.
Oct 09 09:35:10 compute-2 sudo[3469]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:10 compute-2 sudo[4686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:10 compute-2 sudo[4686]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:10 compute-2 sudo[4711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:10 compute-2 sudo[4711]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:10 compute-2 sudo[4736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:10 compute-2 sudo[4736]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:10 compute-2 sudo[4761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:10 compute-2 sudo[4761]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:10 compute-2 sudo[4786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:10 compute-2 sudo[4786]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:10 compute-2 sudo[4834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:10 compute-2 sudo[4834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4834]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4859]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:11 compute-2 sudo[4859]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4859]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4884]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:35:11 compute-2 sudo[4884]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4884]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:11 compute-2 sudo[4909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4909]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4934]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:11 compute-2 sudo[4934]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4934]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4959]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:11 compute-2 sudo[4959]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4959]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[4984]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:11 compute-2 sudo[4984]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[4984]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5009]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:11 compute-2 sudo[5009]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5009]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:11 compute-2 sudo[5057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5057]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5082]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:11 compute-2 sudo[5082]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5082]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:11 compute-2 sudo[5107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5107]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:11 compute-2 sudo[5132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:11 compute-2 sudo[5157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5157]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:11 compute-2 sudo[5182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5182]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5207]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:11 compute-2 sudo[5207]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5207]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5232]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:11 compute-2 sudo[5232]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5232]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:11 compute-2 sudo[5280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:11 compute-2 sudo[5305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5305]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:11 compute-2 sudo[5330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5330]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5355]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:11 compute-2 sudo[5355]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:11 compute-2 sudo[5355]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:11 compute-2 sudo[5380]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:11 compute-2 sudo[5380]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5380]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:12 compute-2 sudo[5405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5405]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:12 compute-2 sudo[5430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5430]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:12 compute-2 sudo[5455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5455]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5503]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:12 compute-2 sudo[5503]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5503]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5528]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:12 compute-2 sudo[5528]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5528]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:12 compute-2 sudo[5553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5553]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:12 compute-2 sudo[5578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 sudo[5578]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:12 compute-2 sudo[5603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:12 compute-2 sudo[5603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:12 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:12 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.741459505 +0000 UTC m=+0.031868920 container create 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 09 09:35:12 compute-2 systemd[1]: Started libpod-conmon-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope.
Oct 09 09:35:12 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.790188849 +0000 UTC m=+0.080598285 container init 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.79447859 +0000 UTC m=+0.084888005 container start 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.795769263 +0000 UTC m=+0.086178678 container attach 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:12 compute-2 trusting_turing[5676]: 167 167
Oct 09 09:35:12 compute-2 systemd[1]: libpod-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope: Deactivated successfully.
Oct 09 09:35:12 compute-2 conmon[5676]: conmon 70ccd8a4cace9b58f990 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope/container/memory.events
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.798794669 +0000 UTC m=+0.089204224 container died 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.816028018 +0000 UTC m=+0.106437433 container remove 70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_turing, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:12 compute-2 podman[5662]: 2025-10-09 09:35:12.729357082 +0000 UTC m=+0.019766517 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:12 compute-2 systemd[1]: libpod-conmon-70ccd8a4cace9b58f990b87d59f6c430fa8d487908fac4b19d9225164a96f768.scope: Deactivated successfully.
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.869313715 +0000 UTC m=+0.033368327 container create 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Oct 09 09:35:12 compute-2 systemd[1]: Started libpod-conmon-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope.
Oct 09 09:35:12 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/tmp/config supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af3806904ce479f5699db731978b876ece72efb5f536983ea729c091b860d28a/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.918360629 +0000 UTC m=+0.082415241 container init 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.923708554 +0000 UTC m=+0.087763156 container start 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.925333948 +0000 UTC m=+0.089388571 container attach 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.85517923 +0000 UTC m=+0.019233852 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:12 compute-2 systemd[1]: libpod-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope: Deactivated successfully.
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.972433871 +0000 UTC m=+0.136488473 container died 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:12 compute-2 podman[5690]: 2025-10-09 09:35:12.992235053 +0000 UTC m=+0.156289655 container remove 9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_borg, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 09:35:13 compute-2 systemd[1]: libpod-conmon-9d15bc9be336fca6eeabf967c7b029daefe494904274c55743bb20b6f48052a0.scope: Deactivated successfully.
Oct 09 09:35:13 compute-2 systemd[1]: Reloading.
Oct 09 09:35:13 compute-2 systemd-sysv-generator[5765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:13 compute-2 systemd-rc-local-generator[5761]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:13 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:13 compute-2 systemd[1]: Reloading.
Oct 09 09:35:13 compute-2 systemd-sysv-generator[5801]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:13 compute-2 systemd-rc-local-generator[5798]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:13 compute-2 systemd[1]: Reached target All Ceph clusters and services.
Oct 09 09:35:13 compute-2 systemd[1]: Reloading.
Oct 09 09:35:13 compute-2 systemd-rc-local-generator[5835]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:13 compute-2 systemd-sysv-generator[5839]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:13 compute-2 systemd[1]: Reached target Ceph cluster 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:13 compute-2 systemd[1]: Reloading.
Oct 09 09:35:13 compute-2 systemd-rc-local-generator[5872]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:13 compute-2 systemd-sysv-generator[5875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:13 compute-2 systemd[1]: Reloading.
Oct 09 09:35:13 compute-2 systemd-sysv-generator[5919]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:13 compute-2 systemd-rc-local-generator[5913]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:14 compute-2 systemd[1]: Created slice Slice /system/ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:14 compute-2 systemd[1]: Reached target System Time Set.
Oct 09 09:35:14 compute-2 systemd[1]: Reached target System Time Synchronized.
Oct 09 09:35:14 compute-2 systemd[1]: Starting Ceph mon.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:14 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:14 compute-2 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct 09 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.271482158 +0000 UTC m=+0.033302133 container create 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6ff03d05b04352001d43895168cf2a7ccb22bd63df33cb8494051eacc34df7e/merged/var/lib/ceph/mon/ceph-compute-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.315913197 +0000 UTC m=+0.077733191 container init 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 09 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.32077221 +0000 UTC m=+0.082592184 container start 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:14 compute-2 bash[5967]: 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7
Oct 09 09:35:14 compute-2 podman[5967]: 2025-10-09 09:35:14.257754341 +0000 UTC m=+0.019574335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:14 compute-2 systemd[1]: Started Ceph mon.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:14 compute-2 ceph-mon[5983]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:14 compute-2 ceph-mon[5983]: load: jerasure load: lrc 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Git sha 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB SUMMARY
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB Session ID:  IGXT8FL5CO7VG5U36Z5B
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-2/store.db dir, Total Num: 0, files: 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-2/store.db: 000004.log size: 511 ; 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                     Options.env: 0x56479294bc20
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                      Options.fs: PosixFileSystem
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                Options.info_log: 0x5647939cda20
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                                 Options.wal_dir: 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.write_buffer_manager: 0x5647939d1900
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.row_cache: None
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                              Options.wal_filter: None
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.wal_compression: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_background_jobs: 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.max_total_wal_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.compaction_readahead_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Compression algorithms supported:
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kZSTD supported: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:           Options.merge_operator: 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.compaction_filter: None
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5647939cc5c0)
                                            cache_index_and_filter_blocks: 1
                                            cache_index_and_filter_blocks_with_high_priority: 0
                                            pin_l0_filter_and_index_blocks_in_cache: 0
                                            pin_top_level_index_and_filter: 1
                                            index_type: 0
                                            data_block_index_type: 0
                                            index_shortening: 1
                                            data_block_hash_table_util_ratio: 0.750000
                                            checksum: 4
                                            no_block_cache: 0
                                            block_cache: 0x5647939f1350
                                            block_cache_name: BinnedLRUCache
                                            block_cache_options:
                                              capacity : 536870912
                                              num_shard_bits : 4
                                              strict_capacity_limit : 0
                                              high_pri_pool_ratio: 0.000
                                            block_cache_compressed: (nil)
                                            persistent_cache: (nil)
                                            block_size: 4096
                                            block_size_deviation: 10
                                            block_restart_interval: 16
                                            index_block_restart_interval: 1
                                            metadata_block_size: 4096
                                            partition_filters: 0
                                            use_delta_encoding: 1
                                            filter_policy: bloomfilter
                                            whole_key_filtering: 1
                                            verify_compression: 0
                                            read_amp_bytes_per_bit: 0
                                            format_version: 5
                                            enable_index_compression: 1
                                            block_align: 0
                                            max_auto_readahead_size: 262144
                                            prepopulate_block_cache: 0
                                            initial_auto_readahead_size: 8192
                                            num_file_reads_for_auto_readahead: 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.write_buffer_size: 33554432
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.max_write_buffer_number: 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.compression: NoCompression
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.num_levels: 7
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:35:14 compute-2 sudo[5603]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-2/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7f5b1458-47b7-4c0b-a668-6fbde19939d2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514363222, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514364133, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1648, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 523, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 401, "raw_average_value_size": 80, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002514364210, "job": 1, "event": "recovery_finished"}
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5647939f2e00
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: DB pointer 0x564793afc000
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2 does not exist in monmap, will attempt to join an existing cluster
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:35:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                          Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                          Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      1/0    1.61 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Sum      1/0    1.61 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 0.0 total, 0.0 interval
                                          Flush(GB): cumulative 0.000, interval 0.000
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Interval compaction: 0.00 GB write, 0.18 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.2e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.64 KB,0.00012219%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:35:14 compute-2 ceph-mon[5983]: using public_addr v2:192.168.122.102:0/0 -> [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0]
Oct 09 09:35:14 compute-2 ceph-mon[5983]: starting mon.compute-2 rank -1 at public addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] at bind addrs [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-2 fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(???) e0 preinit fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).mds e1 new map
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).mds e1 print_map
                                          e1
                                          btime 2025-10-09T09:33:39:705322+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: -1
                                           
                                          No filesystems configured
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e1 e1: 0 total, 0 up, 0 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e2 e2: 0 total, 0 up, 0 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e3 e3: 0 total, 0 up, 0 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e4 e4: 1 total, 0 up, 1 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e5 e5: 2 total, 0 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e6 e6: 2 total, 0 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e7 e7: 2 total, 0 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e8 e8: 2 total, 1 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e9 e9: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e10 e10: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e11 e11: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e12 e12: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e13 e13: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e14 e14: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e15 e15: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e16 e16: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e17 e17: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e18 e18: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e19 e19: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e20 e20: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e21 e21: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e22 e22: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 e23: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 3314933000852226048, adjusting msgr requires
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).osd e23 crush map has features 288514051259236352, adjusting msgr requires
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e6: 2 total, 0 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to  5248M
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e7: 2 total, 0 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osd.1 [v2:192.168.122.100:6802/3144091891,v1:192.168.122.100:6803/3144091891] boot
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e8: 2 total, 1 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: purged_snaps scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: purged_snaps scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: OSD bench result of 25996.309425 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/854922803' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: purged_snaps scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: purged_snaps scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: OSD bench result of 11440.697696 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osd.0 [v2:192.168.122.101:6800/3679111284,v1:192.168.122.101:6801/3679111284] boot
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e9: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v31: 1 pgs: 1 unknown; 0 B data, 122 MiB used, 20 GiB / 20 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3807816729' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e10: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1972273422' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e11: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mgrmap e9: compute-0.lwqgfy(active, since 60s)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v34: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 148 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4109488378' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e12: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2120229509' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e13: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v37: 5 pgs: 1 active+clean, 2 unknown, 2 creating+peering; 0 B data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1793952825' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e14: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/395083493' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e15: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v40: 7 pgs: 3 active+clean, 3 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2631429048' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e16: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/992561200' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e17: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1d scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1d scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v43: 38 pgs: 3 active+clean, 34 unknown, 1 creating+peering; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1830712947' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e18: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1f scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1f scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3454543203' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e19: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1b scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1b scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v46: 38 pgs: 6 active+clean, 32 unknown; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/602017510' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e20: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.9 scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.9 scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2594759833' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e21: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.6 deep-scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.6 deep-scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v49: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e22: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1c scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1c scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v51: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3549201441' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.8 scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.8 scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.d deep-scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.d deep-scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3070980083' entity='client.admin' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.14223 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service ingress.rgw.default spec with placement count:2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.7 scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.7 scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.c scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.c scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.2 scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v53: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.2 scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1e scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.1e scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.5 deep-scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.5 deep-scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.14225 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service node-exporter spec with placement *
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service grafana spec with placement compute-0;count:1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service prometheus spec with placement compute-0;count:1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Saving service alertmanager spec with placement compute-0;count:1
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: pgmap v54: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Deploying daemon mon.compute-2 on compute-2
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2266537364' entity='client.admin' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.a scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.a scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.0 scrub starts
Oct 09 09:35:14 compute-2 ceph-mon[5983]: 2.0 scrub ok
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3921635866' entity='client.admin' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Oct 09 09:35:14 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct 09 09:35:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4272592449' entity='client.admin' 
Oct 09 09:35:14 compute-2 ceph-mon[5983]: mon.compute-2@-1(synchronizing).paxosservice(auth 1..7) refresh upgraded, format 0 -> 3
Oct 09 09:35:15 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 09:35:15 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e1  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 09:35:16 compute-2 ceph-mon[5983]: mon.compute-2@-1(probing) e2  my rank is now 1 (was -1)
Oct 09 09:35:16 compute-2 ceph-mon[5983]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct 09 09:35:16 compute-2 ceph-mon[5983]: paxos.1).electionLogic(0) init, first boot, initializing epoch at 1 
Oct 09 09:35:16 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:17 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mgrc update_daemon_metadata mon.compute-2 metadata {addrs=[v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:04:00.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-2,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,os=Linux}
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.4 scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.4 scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.3 scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.3 scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: pgmap v55: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:19 compute-2 ceph-mon[5983]: Deploying daemon mon.compute-1 on compute-1
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-0 calling monitor election
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.1 scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.1 scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.b scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.b scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.10 scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.10 scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.f deep-scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.f deep-scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: pgmap v56: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2 calling monitor election
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.e scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.e scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.11 deep-scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.11 deep-scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.15 scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.15 scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.12 deep-scrub starts
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 2.12 deep-scrub ok
Oct 09 09:35:19 compute-2 ceph-mon[5983]: pgmap v57: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Oct 09 09:35:19 compute-2 ceph-mon[5983]: monmap epoch 2
Oct 09 09:35:19 compute-2 ceph-mon[5983]: fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:19 compute-2 ceph-mon[5983]: last_changed 2025-10-09T09:35:14.415832+0000
Oct 09 09:35:19 compute-2 ceph-mon[5983]: created 2025-10-09T09:33:38.201593+0000
Oct 09 09:35:19 compute-2 ceph-mon[5983]: min_mon_release 19 (squid)
Oct 09 09:35:19 compute-2 ceph-mon[5983]: election_strategy: 1
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 09:35:19 compute-2 ceph-mon[5983]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 09:35:19 compute-2 ceph-mon[5983]: fsmap 
Oct 09 09:35:19 compute-2 ceph-mon[5983]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mgrmap e9: compute-0.lwqgfy(active, since 83s)
Oct 09 09:35:19 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:35:19 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:19 compute-2 sudo[6022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:19 compute-2 sudo[6022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:19 compute-2 sudo[6022]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:19 compute-2 sudo[6047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:19 compute-2 sudo[6047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Oct 09 09:35:19 compute-2 ceph-mon[5983]: log_channel(cluster) log [INF] : mon.compute-2 calling monitor election
Oct 09 09:35:19 compute-2 ceph-mon[5983]: paxos.1).electionLogic(10) init, last seen epoch 10
Oct 09 09:35:19 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.838960145 +0000 UTC m=+0.030979814 container create 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:19 compute-2 systemd[1]: Started libpod-conmon-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope.
Oct 09 09:35:19 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.911232236 +0000 UTC m=+0.103251905 container init 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.916137206 +0000 UTC m=+0.108156876 container start 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.917768943 +0000 UTC m=+0.109788622 container attach 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:19 compute-2 focused_pasteur[6117]: 167 167
Oct 09 09:35:19 compute-2 systemd[1]: libpod-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope: Deactivated successfully.
Oct 09 09:35:19 compute-2 conmon[6117]: conmon 531df004245249dfb83f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope/container/memory.events
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.920685494 +0000 UTC m=+0.112705163 container died 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.82480451 +0000 UTC m=+0.016824190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:19 compute-2 systemd[1]: var-lib-containers-storage-overlay-2dba510c6dc012c6aa3a67551d5b20cc17ca4d9834dbd897e3b0182da2987968-merged.mount: Deactivated successfully.
Oct 09 09:35:19 compute-2 podman[6104]: 2025-10-09 09:35:19.939364349 +0000 UTC m=+0.131384018 container remove 531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_pasteur, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 09:35:19 compute-2 systemd[1]: libpod-conmon-531df004245249dfb83f5ef2a914b45278941821cd90afa1b1c775c385a64dd1.scope: Deactivated successfully.
Oct 09 09:35:19 compute-2 systemd[1]: Reloading.
Oct 09 09:35:20 compute-2 systemd-rc-local-generator[6149]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:20 compute-2 systemd-sysv-generator[6155]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:20 compute-2 systemd[1]: Reloading.
Oct 09 09:35:20 compute-2 systemd-rc-local-generator[6196]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:20 compute-2 systemd-sysv-generator[6201]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:20 compute-2 systemd[1]: Starting Ceph mgr.compute-2.takdnm for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.598001558 +0000 UTC m=+0.027733892 container create ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:20 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fba43d5b2f5ca9d1f59afd307fe74db1ee81c18d93d66367702d14a00b22d23/merged/var/lib/ceph/mgr/ceph-compute-2.takdnm supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.649195159 +0000 UTC m=+0.078927503 container init ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct 09 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.654354699 +0000 UTC m=+0.084087023 container start ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct 09 09:35:20 compute-2 bash[6248]: ac1c41ea23aace04e6cdc65048ccb460559b2f8095188ec6c54f9b380c1b4c76
Oct 09 09:35:20 compute-2 podman[6248]: 2025-10-09 09:35:20.586507161 +0000 UTC m=+0.016239505 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:20 compute-2 systemd[1]: Started Ceph mgr.compute-2.takdnm for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:20 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:20 compute-2 sudo[6047]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:20 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:21 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:22 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:23 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:23 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.13 scrub starts
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.13 scrub ok
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-0 calling monitor election
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-2 calling monitor election
Oct 09 09:35:24 compute-2 ceph-mon[5983]: pgmap v58: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.17 scrub starts
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.17 scrub ok
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.18 scrub starts
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.18 scrub ok
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-1 calling monitor election
Oct 09 09:35:24 compute-2 ceph-mon[5983]: pgmap v59: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.1a scrub starts
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2.1a scrub ok
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: pgmap v60: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Oct 09 09:35:24 compute-2 ceph-mon[5983]: monmap epoch 3
Oct 09 09:35:24 compute-2 ceph-mon[5983]: fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:24 compute-2 ceph-mon[5983]: last_changed 2025-10-09T09:35:19.619597+0000
Oct 09 09:35:24 compute-2 ceph-mon[5983]: created 2025-10-09T09:33:38.201593+0000
Oct 09 09:35:24 compute-2 ceph-mon[5983]: min_mon_release 19 (squid)
Oct 09 09:35:24 compute-2 ceph-mon[5983]: election_strategy: 1
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Oct 09 09:35:24 compute-2 ceph-mon[5983]: 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Oct 09 09:35:24 compute-2 ceph-mon[5983]: fsmap 
Oct 09 09:35:24 compute-2 ceph-mon[5983]: osdmap e23: 2 total, 2 up, 2 in
Oct 09 09:35:24 compute-2 ceph-mon[5983]: mgrmap e9: compute-0.lwqgfy(active, since 88s)
Oct 09 09:35:24 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.etokpp", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:35:24 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_auth_request failed to assign global_id
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.199+0000 7f55208db140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.281+0000 7f55208db140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:25.976+0000 7f55208db140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:25 compute-2 sudo[6296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:26 compute-2 sudo[6296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:26 compute-2 sudo[6296]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:26 compute-2 sudo[6321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:26 compute-2 sudo[6321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:26 compute-2 ceph-mon[5983]: Deploying daemon mgr.compute-1.etokpp on compute-1
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3098806995' entity='client.admin' 
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct 09 09:35:26 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.32894793 +0000 UTC m=+0.028241208 container create 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:26 compute-2 systemd[1]: Started libpod-conmon-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope.
Oct 09 09:35:26 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.376003899 +0000 UTC m=+0.075297177 container init 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 09 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.380888329 +0000 UTC m=+0.080181597 container start 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 09 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.381927504 +0000 UTC m=+0.081220771 container attach 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Oct 09 09:35:26 compute-2 elegant_noether[6394]: 167 167
Oct 09 09:35:26 compute-2 systemd[1]: libpod-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope: Deactivated successfully.
Oct 09 09:35:26 compute-2 conmon[6394]: conmon 2335c0a72bbe46f7274d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope/container/memory.events
Oct 09 09:35:26 compute-2 podman[6399]: 2025-10-09 09:35:26.414060193 +0000 UTC m=+0.016239198 container died 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct 09 09:35:26 compute-2 podman[6381]: 2025-10-09 09:35:26.318517349 +0000 UTC m=+0.017810638 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:26 compute-2 systemd[1]: var-lib-containers-storage-overlay-b64f79332c2fcbff3d3fb3e32b69334d476e4ec5ef444912c1e82aa42f0651cf-merged.mount: Deactivated successfully.
Oct 09 09:35:26 compute-2 podman[6399]: 2025-10-09 09:35:26.435895574 +0000 UTC m=+0.038074569 container remove 2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:35:26 compute-2 systemd[1]: libpod-conmon-2335c0a72bbe46f7274dbd94dbc0bf0f0d5b9753372baabf5c4be43397bddf35.scope: Deactivated successfully.
Oct 09 09:35:26 compute-2 systemd[1]: Reloading.
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:26 compute-2 systemd-sysv-generator[6458]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.539+0000 7f55208db140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 systemd-rc-local-generator[6455]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:26 compute-2 sudo[6435]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uobzeauiwjcqyvzpwoacruuxtqzylahq ; /usr/bin/python3'
Oct 09 09:35:26 compute-2 sudo[6435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.687+0000 7f55208db140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 systemd[1]: Reloading.
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.753+0000 7f55208db140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 systemd-sysv-generator[6497]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:26 compute-2 systemd-rc-local-generator[6494]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:26 compute-2 python3[6473]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:26 compute-2 sudo[6435]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:26.879+0000 7f55208db140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:26 compute-2 systemd[1]: Starting Ceph crash.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.081760598 +0000 UTC m=+0.030359621 container create fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/etc/ceph/ceph.client.crash.compute-2.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06556f10c38fc830e1631027b776dcf2bcce581d547ad534447eea36a6511c19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.127669048 +0000 UTC m=+0.076268081 container init fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 09 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.131851372 +0000 UTC m=+0.080450395 container start fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True)
Oct 09 09:35:27 compute-2 bash[6561]: fcd5272d81fa2dcefb791e007a9a7adc69f29ccefc09f5587d2725cc8f9ba2e4
Oct 09 09:35:27 compute-2 podman[6561]: 2025-10-09 09:35:27.069768106 +0000 UTC m=+0.018367149 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:27 compute-2 systemd[1]: Started Ceph crash.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:27 compute-2 sudo[6321]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: INFO:ceph-crash:pinging cluster to exercise our key
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:27 compute-2 sudo[6580]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:27 compute-2 sudo[6580]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:27 compute-2 sudo[6580]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.266+0000 7f2f00f97640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.266+0000 7f2f00f97640 -1 AuthRegistry(0x7f2efc0696b0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.267+0000 7f2f00f97640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.267+0000 7f2f00f97640 -1 AuthRegistry(0x7f2f00f95ff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.268+0000 7f2efad76640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2ef9d74640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2efa575640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: 2025-10-09T09:35:27.269+0000 7f2f00f97640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-crash-compute-2[6573]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct 09 09:35:27 compute-2 sudo[6605]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 09 09:35:27 compute-2 sudo[6605]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:27 compute-2 ceph-mon[5983]: Deploying daemon crash.compute-2 on compute-2
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2874472706' entity='client.admin' 
Oct 09 09:35:27 compute-2 ceph-mon[5983]: pgmap v61: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:35:27 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.600102076 +0000 UTC m=+0.032890933 container create 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:27 compute-2 systemd[1]: Started libpod-conmon-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope.
Oct 09 09:35:27 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.653473068 +0000 UTC m=+0.086261926 container init 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.659209508 +0000 UTC m=+0.091998366 container start 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.660505687 +0000 UTC m=+0.093294546 container attach 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Oct 09 09:35:27 compute-2 gifted_jemison[6685]: 167 167
Oct 09 09:35:27 compute-2 systemd[1]: libpod-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope: Deactivated successfully.
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.664558446 +0000 UTC m=+0.097347304 container died 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:35:27 compute-2 systemd[1]: var-lib-containers-storage-overlay-71f4207de405471f2d8e69bf33bc425fad32b2c3ba5eeaf3feb31a1167716983-merged.mount: Deactivated successfully.
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.585714708 +0000 UTC m=+0.018503576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:27 compute-2 podman[6672]: 2025-10-09 09:35:27.686993179 +0000 UTC m=+0.119782028 container remove 18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:27 compute-2 systemd[1]: libpod-conmon-18458c4de0184929a6b4536ac88178b99a6b4b4ab67aa32b3d952e512dc36d2f.scope: Deactivated successfully.
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:27.793+0000 7f55208db140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.807404055 +0000 UTC m=+0.028804863 container create b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 09 09:35:27 compute-2 systemd[1]: Started libpod-conmon-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope.
Oct 09 09:35:27 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.862179152 +0000 UTC m=+0.083579969 container init b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.867559959 +0000 UTC m=+0.088960766 container start b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct 09 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.86872515 +0000 UTC m=+0.090125958 container attach b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct 09 09:35:27 compute-2 podman[6706]: 2025-10-09 09:35:27.796266188 +0000 UTC m=+0.017667015 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.005+0000 7f55208db140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.080+0000 7f55208db140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.143+0000 7f55208db140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: --> passed data devices: 0 physical, 1 LVM
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.216+0000 7f55208db140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 0493bfe4-e28c-49f6-8185-a07f1e80a32f
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.282+0000 7f55208db140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 e24: 3 total, 2 up, 3 in
Oct 09 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3618703096' entity='client.admin' 
Oct 09 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]: dispatch
Oct 09 09:35:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2413203245' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f"}]': finished
Oct 09 09:35:28 compute-2 ceph-mon[5983]: osdmap e24: 3 total, 2 up, 3 in
Oct 09 09:35:28 compute-2 ceph-mon[5983]: from='mgr.14122 192.168.122.100:0/4065628814' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.604+0000 7f55208db140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 lvm[6780]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:35:28 compute-2 lvm[6780]: VG ceph_vg0 finished
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct 09 09:35:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:28.694+0000 7f55208db140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:28 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon getmap"} v 0)
Oct 09 09:35:28 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/954261656' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]:  stderr: got monmap epoch 3
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: --> Creating keyring file for osd.2
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct 09 09:35:28 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 0493bfe4-e28c-49f6-8185-a07f1e80a32f --setuser ceph --setgroup ceph
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.095+0000 7f55208db140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 _set_new_cache_sizes cache_size:1019927211 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:29 compute-2 ceph-mon[5983]: pgmap v62: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1996078233' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 09:35:29 compute-2 ceph-mon[5983]: mgrmap e10: compute-0.lwqgfy(active, since 92s)
Oct 09 09:35:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/954261656' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct 09 09:35:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.611+0000 7f55208db140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 sshd-session[3019]: Connection closed by 192.168.122.100 port 42608
Oct 09 09:35:29 compute-2 sshd-session[2787]: Connection closed by 192.168.122.100 port 42554
Oct 09 09:35:29 compute-2 sshd-session[3075]: Connection closed by 192.168.122.100 port 42624
Oct 09 09:35:29 compute-2 sshd-session[2990]: Connection closed by 192.168.122.100 port 42594
Oct 09 09:35:29 compute-2 sshd-session[3046]: Connection closed by 192.168.122.100 port 42622
Oct 09 09:35:29 compute-2 sshd-session[2961]: Connection closed by 192.168.122.100 port 42586
Oct 09 09:35:29 compute-2 sshd-session[2816]: Connection closed by 192.168.122.100 port 42558
Oct 09 09:35:29 compute-2 systemd[1]: session-6.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 sshd-session[2932]: Connection closed by 192.168.122.100 port 42584
Oct 09 09:35:29 compute-2 sshd-session[2903]: Connection closed by 192.168.122.100 port 42580
Oct 09 09:35:29 compute-2 sshd-session[2874]: Connection closed by 192.168.122.100 port 42570
Oct 09 09:35:29 compute-2 sshd-session[2845]: Connection closed by 192.168.122.100 port 42566
Oct 09 09:35:29 compute-2 sshd-session[2785]: Connection closed by 192.168.122.100 port 42542
Oct 09 09:35:29 compute-2 sshd-session[2781]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 sshd-session[2871]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 sshd-session[2764]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 6 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 sshd-session[3072]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-4.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 6.
Oct 09 09:35:29 compute-2 systemd[1]: session-9.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 4 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 16 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 9 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 sshd-session[2987]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 sshd-session[2958]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 4.
Oct 09 09:35:29 compute-2 sshd-session[2900]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-13.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 sshd-session[2842]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-10.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 sshd-session[2813]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-12.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 sshd-session[2929]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-7.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 sshd-session[3043]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-8.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 13 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd[1]: session-11.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 10 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 8 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd[1]: session-15.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 7 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 12 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 11 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 15 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 9.
Oct 09 09:35:29 compute-2 sshd-session[3016]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:29 compute-2 systemd[1]: session-14.scope: Deactivated successfully.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Session 14 logged out. Waiting for processes to exit.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 13.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 10.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 12.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 7.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 8.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 11.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 15.
Oct 09 09:35:29 compute-2 systemd-logind[800]: Removed session 14.
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.696+0000 7f55208db140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.771+0000 7f55208db140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.907+0000 7f55208db140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:29 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:29.981+0000 7f55208db140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.131+0000 7f55208db140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.338+0000 7f55208db140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:30 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/70415478' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 09:35:30 compute-2 ceph-mon[5983]: mgrmap e11: compute-0.lwqgfy(active, since 93s)
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.575+0000 7f55208db140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.642+0000 7f55208db140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x55a9027c4d00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr respawn  exe_path /proc/self/exe
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.825+0000 7ff291100140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:30.902+0000 7ff291100140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:30 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct 09 09:35:31 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:31 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:31.613+0000 7ff291100140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:31 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:31 compute-2 heuristic_tharp[6719]:  stderr: 2025-10-09T09:35:29.018+0000 7f9a8e6a0740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Oct 09 09:35:31 compute-2 heuristic_tharp[6719]:  stderr: 2025-10-09T09:35:29.289+0000 7f9a8e6a0740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct 09 09:35:31 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct 09 09:35:31 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:35:31 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 09 09:35:32 compute-2 heuristic_tharp[6719]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct 09 09:35:32 compute-2 systemd[1]: libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Deactivated successfully.
Oct 09 09:35:32 compute-2 systemd[1]: libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Consumed 1.538s CPU time.
Oct 09 09:35:32 compute-2 conmon[6719]: conmon b809419563b3cc170f96 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope/container/memory.events
Oct 09 09:35:32 compute-2 podman[6706]: 2025-10-09 09:35:32.177346398 +0000 UTC m=+4.398747206 container died b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.181+0000 7ff291100140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-3c7168838ce7c6dcf0c1e03e9da57f06f1537d01154e87bdd22ccf28ce10d011-merged.mount: Deactivated successfully.
Oct 09 09:35:32 compute-2 podman[6706]: 2025-10-09 09:35:32.210080404 +0000 UTC m=+4.431481212 container remove b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=heuristic_tharp, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:32 compute-2 systemd[1]: libpod-conmon-b809419563b3cc170f96bb60a24d855f16f4110440ffd2046bb5473cd719322a.scope: Deactivated successfully.
Oct 09 09:35:32 compute-2 sudo[6605]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:32 compute-2 systemd[1]: session-16.scope: Deactivated successfully.
Oct 09 09:35:32 compute-2 systemd[1]: session-16.scope: Consumed 42.981s CPU time.
Oct 09 09:35:32 compute-2 systemd-logind[800]: Removed session 16.
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.338+0000 7ff291100140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.402+0000 7ff291100140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:32.526+0000 7ff291100140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:32 compute-2 ceph-mon[5983]: mgrmap e12: compute-0.lwqgfy(active, since 95s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:32 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.403+0000 7ff291100140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.599+0000 7ff291100140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.667+0000 7ff291100140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.726+0000 7ff291100140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.795+0000 7ff291100140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:33.857+0000 7ff291100140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:33 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.161+0000 7ff291100140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.246+0000 7ff291100140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct 09 09:35:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e24 _set_new_cache_sizes cache_size:1020053014 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:34.628+0000 7ff291100140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:34 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct 09 09:35:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 e25: 3 total, 2 up, 3 in
Oct 09 09:35:35 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:35 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:35 compute-2 ceph-mon[5983]: osdmap e25: 3 total, 2 up, 3 in
Oct 09 09:35:35 compute-2 ceph-mon[5983]: mgrmap e13: compute-0.lwqgfy(active, starting, since 0.015525s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:35:35 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.123+0000 7ff291100140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.185+0000 7ff291100140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.254+0000 7ff291100140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.381+0000 7ff291100140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.442+0000 7ff291100140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:35 compute-2 sshd-session[7725]: Accepted publickey for ceph-admin from 192.168.122.100 port 51782 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:35:35 compute-2 systemd-logind[800]: New session 17 of user ceph-admin.
Oct 09 09:35:35 compute-2 systemd[1]: Started Session 17 of User ceph-admin.
Oct 09 09:35:35 compute-2 sshd-session[7725]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:35 compute-2 sudo[7729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.579+0000 7ff291100140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:35 compute-2 sudo[7729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:35 compute-2 sudo[7729]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:35 compute-2 sudo[7754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:35:35 compute-2 sudo[7754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:35.768+0000 7ff291100140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:35 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:36.001+0000 7ff291100140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-2 podman[7834]: 2025-10-09 09:35:36.025621579 +0000 UTC m=+0.043135432 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:36.064+0000 7ff291100140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x556e0c442d00 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct 09 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:35:36 compute-2 ceph-mon[5983]: mgrmap e14: compute-0.lwqgfy(active, since 1.02391s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:36 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:36 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:36 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:36 compute-2 podman[7834]: 2025-10-09 09:35:36.117256197 +0000 UTC m=+0.134770050 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Oct 09 09:35:36 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct 09 09:35:36 compute-2 sudo[7754]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-2 sudo[7902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:36 compute-2 sudo[7902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-2 sudo[7902]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-2 sudo[7927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:35:36 compute-2 sudo[7927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-2 sudo[7927]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-2 sudo[7981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:36 compute-2 sudo[7981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:36 compute-2 sudo[7981]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:36 compute-2 sudo[8006]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:35:36 compute-2 sudo[8006]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8006]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:37 compute-2 sudo[8047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8047]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:37 compute-2 sudo[8072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8072]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='client.14292 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:37 compute-2 ceph-mon[5983]: pgmap v3: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:37 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: mgrmap e15: compute-0.lwqgfy(active, since 2s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:37 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:37 compute-2 sudo[8097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8097]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8122]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:37 compute-2 sudo[8122]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8122]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8147]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8147]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8147]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8195]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8220]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:35:37 compute-2 sudo[8245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8245]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:37 compute-2 sudo[8270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8270]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8295]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:37 compute-2 sudo[8295]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8295]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8320]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8320]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8320]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8345]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:37 compute-2 sudo[8345]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8345]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8370]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8370]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8370]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:37 compute-2 sudo[8418]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:37 compute-2 sudo[8418]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:37 compute-2 sudo[8418]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:38 compute-2 sudo[8443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8443]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-2 sudo[8468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8468]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8493]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:38 compute-2 sudo[8493]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8493]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8518]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:38 compute-2 sudo[8518]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8518]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8543]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8543]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8543]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:38 compute-2 sudo[8568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8568]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8593]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8593]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8593]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8641]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8641]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8641]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8666]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8666]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8666]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:38 compute-2 sudo[8691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8691]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:36] ENGINE Bus STARTING
Oct 09 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:36] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:35:38 compute-2 ceph-mon[5983]: pgmap v4: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Client ('192.168.122.100', 44370) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:35:38 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:37] ENGINE Bus STARTED
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-0 to 128.5M
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-0 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: from='client.14328 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:38 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:38 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:38 compute-2 sudo[8716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:38 compute-2 sudo[8716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8716]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:38 compute-2 sudo[8741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8741]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8766]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8791]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:38 compute-2 sudo[8791]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8791]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8816]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8864]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8864]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8864]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8889]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:38 compute-2 sudo[8889]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8889]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:38 compute-2 sudo[8914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:38 compute-2 sudo[8914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:38 compute-2 sudo[8914]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 _set_new_cache_sizes cache_size:1020054705 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='client.14334 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mon[5983]: from='mgr.24122 192.168.122.100:0/1361071031' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:39 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct 09 09:35:39 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct 09 09:35:40 compute-2 sshd-session[7728]: Connection closed by 192.168.122.100 port 51782
Oct 09 09:35:40 compute-2 sshd-session[7725]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:35:40 compute-2 systemd[1]: session-17.scope: Deactivated successfully.
Oct 09 09:35:40 compute-2 systemd[1]: session-17.scope: Consumed 3.381s CPU time.
Oct 09 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct 09 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:40 compute-2 systemd-logind[800]: Session 17 logged out. Waiting for processes to exit.
Oct 09 09:35:40 compute-2 systemd-logind[800]: Removed session 17.
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.193+0000 7fce0fa2d140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.276+0000 7fce0fa2d140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:40 compute-2 ceph-mon[5983]: from='client.14340 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:40 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-0 on compute-0
Oct 09 09:35:40 compute-2 ceph-mon[5983]: pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:40 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Oct 09 09:35:40 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/536206930' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Oct 09 09:35:40 compute-2 ceph-mon[5983]: mgrmap e16: compute-0.lwqgfy(active, since 4s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:40.989+0000 7fce0fa2d140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:40 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:41 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.557+0000 7fce0fa2d140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.718+0000 7fce0fa2d140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.786+0000 7fce0fa2d140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:41.909+0000 7fce0fa2d140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:41 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1543803184' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Oct 09 09:35:42 compute-2 ceph-mon[5983]: mgrmap e17: compute-0.lwqgfy(active, since 6s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:42.769+0000 7fce0fa2d140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:42.964+0000 7fce0fa2d140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:42 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.030+0000 7fce0fa2d140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.088+0000 7fce0fa2d140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.157+0000 7fce0fa2d140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.220+0000 7fce0fa2d140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.518+0000 7fce0fa2d140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.603+0000 7fce0fa2d140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:43.979+0000 7fce0fa2d140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:43 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct 09 09:35:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e25 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.464+0000 7fce0fa2d140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.526+0000 7fce0fa2d140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.596+0000 7fce0fa2d140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.725+0000 7fce0fa2d140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.787+0000 7fce0fa2d140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:44.929+0000 7fce0fa2d140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.126+0000 7fce0fa2d140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:45 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.372+0000 7fce0fa2d140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.439+0000 7fce0fa2d140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x55674ea03860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr respawn  exe_path /proc/self/exe
Oct 09 09:35:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e26 e26: 3 total, 2 up, 3 in
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.624+0000 7fcbf314c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct 09 09:35:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:45.698+0000 7fcbf314c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:35:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct 09 09:35:46 compute-2 ceph-mon[5983]: mgrmap e18: compute-0.lwqgfy(active, since 10s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:46 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:46 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:46 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:46 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:46 compute-2 ceph-mon[5983]: osdmap e26: 3 total, 2 up, 3 in
Oct 09 09:35:46 compute-2 ceph-mon[5983]: mgrmap e19: compute-0.lwqgfy(active, starting, since 0.0135589s), standbys: compute-1.etokpp, compute-2.takdnm
Oct 09 09:35:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:46.390+0000 7fcbf314c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:35:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:46.942+0000 7fcbf314c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:35:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:   from numpy import show_config as show_numpy_config
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.085+0000 7fcbf314c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.154+0000 7fcbf314c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct 09 09:35:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:47.273+0000 7fcbf314c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct 09 09:35:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.143+0000 7fcbf314c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.332+0000 7fcbf314c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.405+0000 7fcbf314c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.465+0000 7fcbf314c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.533+0000 7fcbf314c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.596+0000 7fcbf314c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.903+0000 7fcbf314c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:35:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:48.989+0000 7fcbf314c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:35:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct 09 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.374+0000 7fcbf314c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct 09 09:35:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.874+0000 7fcbf314c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct 09 09:35:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:49.938+0000 7fcbf314c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:35:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.009+0000 7fcbf314c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.141+0000 7fcbf314c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct 09 09:35:50 compute-2 systemd[1]: Stopping User Manager for UID 42477...
Oct 09 09:35:50 compute-2 systemd[2768]: Activating special unit Exit the Session...
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped target Main User Target.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped target Basic System.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped target Paths.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped target Sockets.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped target Timers.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:35:50 compute-2 systemd[2768]: Closed D-Bus User Message Bus Socket.
Oct 09 09:35:50 compute-2 systemd[2768]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:35:50 compute-2 systemd[2768]: Removed slice User Application Slice.
Oct 09 09:35:50 compute-2 systemd[2768]: Reached target Shutdown.
Oct 09 09:35:50 compute-2 systemd[2768]: Finished Exit the Session.
Oct 09 09:35:50 compute-2 systemd[2768]: Reached target Exit the Session.
Oct 09 09:35:50 compute-2 systemd[1]: user@42477.service: Deactivated successfully.
Oct 09 09:35:50 compute-2 systemd[1]: Stopped User Manager for UID 42477.
Oct 09 09:35:50 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct 09 09:35:50 compute-2 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct 09 09:35:50 compute-2 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct 09 09:35:50 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct 09 09:35:50 compute-2 systemd[1]: Removed slice User Slice of UID 42477.
Oct 09 09:35:50 compute-2 systemd[1]: user-42477.slice: Consumed 47.105s CPU time.
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.209+0000 7fcbf314c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.347+0000 7fcbf314c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.543+0000 7fcbf314c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct 09 09:35:50 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:35:50 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp started
Oct 09 09:35:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e27 e27: 3 total, 2 up, 3 in
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.779+0000 7fcbf314c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct 09 09:35:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:35:50.843+0000 7fcbf314c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x56150f233860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct 09 09:35:50 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct 09 09:35:51 compute-2 sshd-session[9014]: Accepted publickey for ceph-admin from 192.168.122.100 port 52662 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:35:51 compute-2 systemd-logind[800]: New session 18 of user ceph-admin.
Oct 09 09:35:51 compute-2 systemd[1]: Created slice User Slice of UID 42477.
Oct 09 09:35:51 compute-2 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct 09 09:35:51 compute-2 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct 09 09:35:51 compute-2 systemd[1]: Starting User Manager for UID 42477...
Oct 09 09:35:51 compute-2 systemd[9018]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:51 compute-2 systemd[9018]: Queued start job for default target Main User Target.
Oct 09 09:35:51 compute-2 systemd[9018]: Created slice User Application Slice.
Oct 09 09:35:51 compute-2 systemd[9018]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:35:51 compute-2 systemd[9018]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:35:51 compute-2 systemd[9018]: Reached target Paths.
Oct 09 09:35:51 compute-2 systemd[9018]: Reached target Timers.
Oct 09 09:35:51 compute-2 systemd[9018]: Starting D-Bus User Message Bus Socket...
Oct 09 09:35:51 compute-2 systemd[9018]: Starting Create User's Volatile Files and Directories...
Oct 09 09:35:51 compute-2 systemd[9018]: Finished Create User's Volatile Files and Directories.
Oct 09 09:35:51 compute-2 systemd[9018]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:35:51 compute-2 systemd[9018]: Reached target Sockets.
Oct 09 09:35:51 compute-2 systemd[9018]: Reached target Basic System.
Oct 09 09:35:51 compute-2 systemd[9018]: Reached target Main User Target.
Oct 09 09:35:51 compute-2 systemd[9018]: Startup finished in 97ms.
Oct 09 09:35:51 compute-2 systemd[1]: Started User Manager for UID 42477.
Oct 09 09:35:51 compute-2 systemd[1]: Started Session 18 of User ceph-admin.
Oct 09 09:35:51 compute-2 sshd-session[9014]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:35:51 compute-2 sudo[9034]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:51 compute-2 sudo[9034]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:51 compute-2 sudo[9034]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:51 compute-2 sudo[9059]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:35:51 compute-2 sudo[9059]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:51 compute-2 ceph-mon[5983]: mgrmap e20: compute-0.lwqgfy(active, starting, since 5s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:51 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:35:51 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:35:51 compute-2 ceph-mon[5983]: osdmap e27: 3 total, 2 up, 3 in
Oct 09 09:35:51 compute-2 ceph-mon[5983]: mgrmap e21: compute-0.lwqgfy(active, starting, since 0.0130272s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:35:51 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm started
Oct 09 09:35:51 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e2 new map
Oct 09 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e2 print_map
                                          e2
                                          btime 2025-10-09T09:35:51:790448+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:35:51.790428+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
Oct 09 09:35:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e28 e28: 3 total, 2 up, 3 in
Oct 09 09:35:51 compute-2 podman[9140]: 2025-10-09 09:35:51.85947316 +0000 UTC m=+0.045607714 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct 09 09:35:51 compute-2 podman[9140]: 2025-10-09 09:35:51.933680256 +0000 UTC m=+0.119814810 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:52 compute-2 sudo[9059]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-2 sudo[9195]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:52 compute-2 sudo[9195]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-2 sudo[9195]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-2 sudo[9220]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:35:52 compute-2 sudo[9220]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-2 sudo[9220]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-2 sudo[9273]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:52 compute-2 sudo[9273]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-2 sudo[9273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:52 compute-2 sudo[9298]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:35:52 compute-2 sudo[9298]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:52 compute-2 ceph-mon[5983]: mgrmap e22: compute-0.lwqgfy(active, since 1.02912s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct 09 09:35:52 compute-2 ceph-mon[5983]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct 09 09:35:52 compute-2 ceph-mon[5983]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct 09 09:35:52 compute-2 ceph-mon[5983]: osdmap e28: 3 total, 2 up, 3 in
Oct 09 09:35:52 compute-2 ceph-mon[5983]: fsmap cephfs:0
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:52 compute-2 ceph-mon[5983]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Bus STARTING
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='client.24205 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:52 compute-2 ceph-mon[5983]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:52 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:52 compute-2 sudo[9298]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9339]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:53 compute-2 sudo[9339]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9339]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9364]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:53 compute-2 sudo[9364]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9364]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9389]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9389]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9389]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9414]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:53 compute-2 sudo[9414]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9414]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9439]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9439]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9439]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9487]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9512]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9537]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:35:53 compute-2 sudo[9537]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9537]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9562]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:53 compute-2 sudo[9562]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9562]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9587]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:53 compute-2 sudo[9587]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9612]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9637]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:53 compute-2 sudo[9637]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9637]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9662]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9662]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9662]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9710]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9710]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9710]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:35:53 compute-2 sudo[9735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9735]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-2 sudo[9760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9760]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9785]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:35:53 compute-2 sudo[9785]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9785]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 sudo[9810]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:35:53 compute-2 sudo[9810]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9810]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Bus STARTED
Oct 09 09:35:53 compute-2 ceph-mon[5983]: [09/Oct/2025:09:35:52] ENGINE Client ('192.168.122.100', 36178) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:35:53 compute-2 ceph-mon[5983]: pgmap v5: 38 pgs: 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Adjusting osd_memory_target on compute-1 to 128.5M
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Unable to set osd_memory_target on compute-1 to 134814105: error parsing value: Value '134814105' is below minimum 939524096
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='client.14418 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:35:53 compute-2 ceph-mon[5983]: mgrmap e23: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:53 compute-2 sudo[9835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:53 compute-2 sudo[9835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:53 compute-2 sudo[9835]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[9860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:54 compute-2 sudo[9860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[9860]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e29 e29: 3 total, 2 up, 3 in
Oct 09 09:35:54 compute-2 sudo[9885]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[9885]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[9885]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[9933]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[9933]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[9933]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[9958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[9958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[9958]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[9983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:54 compute-2 sudo[9983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[9983]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10008]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:54 compute-2 sudo[10008]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10008]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10033]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:35:54 compute-2 sudo[10033]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10033]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:54 compute-2 sudo[10058]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[10058]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10058]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10083]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:54 compute-2 sudo[10083]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10083]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[10108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10108]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[10156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10156]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10181]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:35:54 compute-2 sudo[10181]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10181]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:54 compute-2 sudo[10206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:54 compute-2 sudo[10206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:54 compute-2 sudo[10206]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Oct 09 09:35:55 compute-2 ceph-mon[5983]: osdmap e29: 3 total, 2 up, 3 in
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e30 e30: 3 total, 2 up, 3 in
Oct 09 09:35:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 e31: 3 total, 2 up, 3 in
Oct 09 09:35:56 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-1 on compute-1
Oct 09 09:35:56 compute-2 ceph-mon[5983]: pgmap v7: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Oct 09 09:35:56 compute-2 ceph-mon[5983]: osdmap e30: 3 total, 2 up, 3 in
Oct 09 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:56 compute-2 ceph-mon[5983]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:56 compute-2 ceph-mon[5983]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Oct 09 09:35:56 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:56 compute-2 ceph-mon[5983]: mgrmap e24: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:56 compute-2 ceph-mon[5983]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct 09 09:35:56 compute-2 sudo[10231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:56 compute-2 sudo[10231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:56 compute-2 sudo[10231]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:57 compute-2 sudo[10256]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/prometheus/node-exporter:v1.7.0 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:35:57 compute-2 sudo[10256]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:57 compute-2 ceph-mon[5983]: osdmap e31: 3 total, 2 up, 3 in
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1480014278' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:57 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:35:57 compute-2 systemd[1]: Reloading.
Oct 09 09:35:57 compute-2 systemd-sysv-generator[10339]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:57 compute-2 systemd-rc-local-generator[10336]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:57 compute-2 systemd[1]: Reloading.
Oct 09 09:35:57 compute-2 systemd-sysv-generator[10380]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:35:57 compute-2 systemd-rc-local-generator[10375]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:35:57 compute-2 systemd[1]: Starting Ceph node-exporter.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:35:57 compute-2 bash[10433]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Oct 09 09:35:58 compute-2 ceph-mon[5983]: pgmap v10: 39 pgs: 1 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:35:58 compute-2 ceph-mon[5983]: Deploying daemon node-exporter.compute-2 on compute-2
Oct 09 09:35:58 compute-2 ceph-mon[5983]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct 09 09:35:58 compute-2 ceph-mon[5983]: mgrmap e25: compute-0.lwqgfy(active, since 6s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:35:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1035192713' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:35:58 compute-2 bash[10433]: Getting image source signatures
Oct 09 09:35:58 compute-2 bash[10433]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Oct 09 09:35:58 compute-2 bash[10433]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Oct 09 09:35:58 compute-2 bash[10433]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Oct 09 09:35:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1636592391' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:35:59 compute-2 bash[10433]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Oct 09 09:35:59 compute-2 bash[10433]: Writing manifest to image destination
Oct 09 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.206069198 +0000 UTC m=+1.254659036 container create 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b844b5135e8869c1009578f3a25cf260daf648a3a2f08915c093974a5f8216f1/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.244817559 +0000 UTC m=+1.293407387 container init 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.248810805 +0000 UTC m=+1.297400632 container start 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:35:59 compute-2 bash[10433]: 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648
Oct 09 09:35:59 compute-2 podman[10433]: 2025-10-09 09:35:59.196141478 +0000 UTC m=+1.244731316 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.253Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.253Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Oct 09 09:35:59 compute-2 systemd[1]: Started Ceph node-exporter.compute-2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.254Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.254Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=arp
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=bcache
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=bonding
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=btrfs
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=conntrack
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=cpu
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=cpufreq
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=diskstats
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=dmi
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=edac
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=entropy
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=fibrechannel
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=filefd
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=filesystem
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=hwmon
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=infiniband
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=ipvs
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=loadavg
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=mdadm
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=meminfo
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netclass
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netdev
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=netstat
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nfs
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nfsd
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=nvme
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=os
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=pressure
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=rapl
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=schedstat
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=selinux
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=sockstat
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=softnet
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=stat
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=tapestats
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=textfile
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=thermal_zone
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=time
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=udp_queues
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=uname
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=vmstat
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=xfs
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.256Z caller=node_exporter.go:117 level=info collector=zfs
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.257Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Oct 09 09:35:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2[10496]: ts=2025-10-09T09:35:59.257Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Oct 09 09:35:59 compute-2 sudo[10256]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:59 compute-2 sudo[10505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:35:59 compute-2 sudo[10505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:59 compute-2 sudo[10505]: pam_unix(sudo:session): session closed for user root
Oct 09 09:35:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:35:59 compute-2 sudo[10530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 --yes --no-systemd
Oct 09 09:35:59 compute-2 sudo[10530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.6910791 +0000 UTC m=+0.028176124 container create fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 09 09:35:59 compute-2 systemd[1]: Started libpod-conmon-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope.
Oct 09 09:35:59 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.742712569 +0000 UTC m=+0.079809614 container init fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.747490117 +0000 UTC m=+0.084587143 container start fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.7490579 +0000 UTC m=+0.086154925 container attach fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:35:59 compute-2 jovial_shaw[10600]: 167 167
Oct 09 09:35:59 compute-2 systemd[1]: libpod-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope: Deactivated successfully.
Oct 09 09:35:59 compute-2 conmon[10600]: conmon fc6c223872c1fdb78f2f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope/container/memory.events
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.751681077 +0000 UTC m=+0.088778102 container died fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:35:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-f6328fab4b5173bb8ad9716f9016672e1827366a6e090d85092a5d346045728e-merged.mount: Deactivated successfully.
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.772590719 +0000 UTC m=+0.109687744 container remove fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Oct 09 09:35:59 compute-2 podman[10587]: 2025-10-09 09:35:59.679125973 +0000 UTC m=+0.016223017 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:35:59 compute-2 systemd[1]: libpod-conmon-fc6c223872c1fdb78f2f8dd51ea3c124731e484995a374b2c5b520d7966d4b23.scope: Deactivated successfully.
Oct 09 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.894607298 +0000 UTC m=+0.036719638 container create b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 09 09:35:59 compute-2 systemd[1]: Started libpod-conmon-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope.
Oct 09 09:35:59 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.958506246 +0000 UTC m=+0.100618596 container init b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.964216606 +0000 UTC m=+0.106328936 container start b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.965544856 +0000 UTC m=+0.107657196 container attach b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:35:59 compute-2 podman[10622]: 2025-10-09 09:35:59.877675122 +0000 UTC m=+0.019787482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:00 compute-2 ceph-mon[5983]: pgmap v11: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1429686175' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:36:00 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:00 compute-2 elegant_merkle[10635]: --> passed data devices: 0 physical, 1 LVM
Oct 09 09:36:00 compute-2 elegant_merkle[10635]: --> All data devices are unavailable
Oct 09 09:36:00 compute-2 systemd[1]: libpod-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope: Deactivated successfully.
Oct 09 09:36:00 compute-2 podman[10622]: 2025-10-09 09:36:00.235925543 +0000 UTC m=+0.378037914 container died b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-49864244327edc44dc6333a514927e07223d06e303c66b2fc7f8dcf159999749-merged.mount: Deactivated successfully.
Oct 09 09:36:00 compute-2 podman[10622]: 2025-10-09 09:36:00.259109573 +0000 UTC m=+0.401221913 container remove b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Oct 09 09:36:00 compute-2 systemd[1]: libpod-conmon-b2f2b8398632312f68d438c8fdc2a28f2eb58324f110408890ac7a0d5c2c0f2e.scope: Deactivated successfully.
Oct 09 09:36:00 compute-2 sudo[10530]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:00 compute-2 sudo[10660]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:00 compute-2 sudo[10660]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:00 compute-2 sudo[10660]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:00 compute-2 sudo[10685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- lvm list --format json
Oct 09 09:36:00 compute-2 sudo[10685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.663310244 +0000 UTC m=+0.027879432 container create 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 09 09:36:00 compute-2 systemd[1]: Started libpod-conmon-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope.
Oct 09 09:36:00 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.712665969 +0000 UTC m=+0.077235178 container init 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.71695876 +0000 UTC m=+0.081527950 container start 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.718007062 +0000 UTC m=+0.082576251 container attach 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:00 compute-2 sweet_cerf[10753]: 167 167
Oct 09 09:36:00 compute-2 systemd[1]: libpod-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope: Deactivated successfully.
Oct 09 09:36:00 compute-2 conmon[10753]: conmon 763336aac84886e66461 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope/container/memory.events
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.720252815 +0000 UTC m=+0.084822004 container died 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Oct 09 09:36:00 compute-2 systemd[1]: var-lib-containers-storage-overlay-57cc32277620f72f082f74c41081d9790162e99d74a05a15d357d814de0465d5-merged.mount: Deactivated successfully.
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.738795054 +0000 UTC m=+0.103364243 container remove 763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cerf, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Oct 09 09:36:00 compute-2 podman[10740]: 2025-10-09 09:36:00.65197325 +0000 UTC m=+0.016542460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:00 compute-2 systemd[1]: libpod-conmon-763336aac84886e66461486d4e8acc2b3aac9944134dad3d395b5b7337316cd0.scope: Deactivated successfully.
Oct 09 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.853648027 +0000 UTC m=+0.028610936 container create 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:00 compute-2 systemd[1]: Started libpod-conmon-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope.
Oct 09 09:36:00 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.915975675 +0000 UTC m=+0.090938604 container init 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.920870043 +0000 UTC m=+0.095832962 container start 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 09 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.922040635 +0000 UTC m=+0.097003544 container attach 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:00 compute-2 podman[10776]: 2025-10-09 09:36:00.841188322 +0000 UTC m=+0.016151261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]: {
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:     "2": [
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:         {
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "devices": [
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "/dev/loop3"
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             ],
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "lv_name": "ceph_lv0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "lv_size": "21470642176",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=286f8bf0-da72-5823-9a4e-ac4457d9e609,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=0493bfe4-e28c-49f6-8185-a07f1e80a32f,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "lv_uuid": "Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "name": "ceph_lv0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "path": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "tags": {
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.block_uuid": "Q9Wtal-P8YX-5ARY-hdyd-7Mzk-oL5W-zIskol",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.cephx_lockbox_secret": "",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.cluster_fsid": "286f8bf0-da72-5823-9a4e-ac4457d9e609",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.cluster_name": "ceph",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.crush_device_class": "",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.encrypted": "0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.osd_fsid": "0493bfe4-e28c-49f6-8185-a07f1e80a32f",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.osd_id": "2",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.osdspec_affinity": "default_drive_group",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.type": "block",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.vdo": "0",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:                 "ceph.with_tpm": "0"
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             },
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "type": "block",
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:             "vg_name": "ceph_vg0"
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:         }
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]:     ]
Oct 09 09:36:01 compute-2 beautiful_leakey[10789]: }
Oct 09 09:36:01 compute-2 systemd[1]: libpod-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope: Deactivated successfully.
Oct 09 09:36:01 compute-2 podman[10776]: 2025-10-09 09:36:01.149770767 +0000 UTC m=+0.324733696 container died 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 09:36:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-45a32ccfd587a1fff4c5849284a31ce4a2159fae72ae6a645799a30bd716c666-merged.mount: Deactivated successfully.
Oct 09 09:36:01 compute-2 podman[10776]: 2025-10-09 09:36:01.169090936 +0000 UTC m=+0.344053845 container remove 788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=beautiful_leakey, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:01 compute-2 systemd[1]: libpod-conmon-788b9ba4ae26b6a294cd853b1569b6ece2709bc61fd81a9a4d78f224ce51d497.scope: Deactivated successfully.
Oct 09 09:36:01 compute-2 sudo[10685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:01 compute-2 sudo[10808]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:01 compute-2 sudo[10808]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:01 compute-2 sudo[10808]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:01 compute-2 sudo[10833]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:01 compute-2 sudo[10833]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.586620263 +0000 UTC m=+0.028552655 container create c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:01 compute-2 systemd[1]: Started libpod-conmon-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope.
Oct 09 09:36:01 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.647722692 +0000 UTC m=+0.089655094 container init c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.652546449 +0000 UTC m=+0.094478831 container start c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.654158589 +0000 UTC m=+0.096090971 container attach c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid)
Oct 09 09:36:01 compute-2 quirky_wright[10905]: 167 167
Oct 09 09:36:01 compute-2 systemd[1]: libpod-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope: Deactivated successfully.
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.65646553 +0000 UTC m=+0.098397912 container died c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.575438506 +0000 UTC m=+0.017370909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:01 compute-2 systemd[1]: var-lib-containers-storage-overlay-88655573ef8ff3ee20a22371d1138ecdbfc4f2cbf5e07069c0345870becff033-merged.mount: Deactivated successfully.
Oct 09 09:36:01 compute-2 podman[10892]: 2025-10-09 09:36:01.680315516 +0000 UTC m=+0.122247898 container remove c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_wright, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Oct 09 09:36:01 compute-2 systemd[1]: libpod-conmon-c9bb1791634692fec2e9d88f0040b6535a2c731572e88bd4f0c714692f9c35a9.scope: Deactivated successfully.
Oct 09 09:36:01 compute-2 ceph-mon[5983]: pgmap v12: 39 pgs: 39 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail; 26 KiB/s rd, 0 B/s wr, 10 op/s
Oct 09 09:36:01 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:01 compute-2 ceph-mon[5983]: from='client.24245 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:01 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct 09 09:36:01 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:01 compute-2 ceph-mon[5983]: Deploying daemon osd.2 on compute-2
Oct 09 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.858882 +0000 UTC m=+0.028326569 container create 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:01 compute-2 systemd[1]: Started libpod-conmon-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope.
Oct 09 09:36:01 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:01 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.913560515 +0000 UTC m=+0.083005094 container init 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.923621437 +0000 UTC m=+0.093066007 container start 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 09 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.924927591 +0000 UTC m=+0.094372160 container attach 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:01 compute-2 podman[10933]: 2025-10-09 09:36:01.84829726 +0000 UTC m=+0.017741848 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Oct 09 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]:                             [--no-systemd] [--no-tmpfs]
Oct 09 09:36:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test[10946]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct 09 09:36:02 compute-2 podman[10933]: 2025-10-09 09:36:02.076339888 +0000 UTC m=+0.245784467 container died 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:02 compute-2 systemd[1]: libpod-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope: Deactivated successfully.
Oct 09 09:36:02 compute-2 systemd[1]: var-lib-containers-storage-overlay-7d6fa609d85484b6e6dc372f31aaf091844b23c1c17d8ec670223419d38a31ed-merged.mount: Deactivated successfully.
Oct 09 09:36:02 compute-2 podman[10933]: 2025-10-09 09:36:02.103903748 +0000 UTC m=+0.273348316 container remove 37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate-test, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Oct 09 09:36:02 compute-2 systemd[1]: libpod-conmon-37341d017c6b7051b5ff37220d9d4d2cc8ed031d5803c827e542061d4f17e798.scope: Deactivated successfully.
Oct 09 09:36:02 compute-2 systemd[1]: Reloading.
Oct 09 09:36:02 compute-2 systemd-rc-local-generator[11000]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:02 compute-2 systemd-sysv-generator[11004]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:02 compute-2 systemd[1]: Reloading.
Oct 09 09:36:02 compute-2 systemd-sysv-generator[11048]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:02 compute-2 systemd-rc-local-generator[11043]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:02 compute-2 systemd[1]: Starting Ceph osd.2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.864901312 +0000 UTC m=+0.031767579 container create 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:02 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:02 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.906569742 +0000 UTC m=+0.073436019 container init 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.911756723 +0000 UTC m=+0.078622990 container start 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.913268515 +0000 UTC m=+0.080134782 container attach 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Oct 09 09:36:02 compute-2 podman[11096]: 2025-10-09 09:36:02.84986665 +0000 UTC m=+0.016732938 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:03 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:03 compute-2 lvm[11189]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:36:03 compute-2 lvm[11189]: VG ceph_vg0 finished
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 bash[11096]: --> Failed to activate via raw: did not find any matching OSD to activate
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:36:03 compute-2 bash[11096]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct 09 09:36:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate[11107]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 09 09:36:03 compute-2 bash[11096]: --> ceph-volume lvm activate successful for osd ID: 2
Oct 09 09:36:03 compute-2 systemd[1]: libpod-01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852.scope: Deactivated successfully.
Oct 09 09:36:03 compute-2 podman[11096]: 2025-10-09 09:36:03.878771289 +0000 UTC m=+1.045637557 container died 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 09:36:03 compute-2 systemd[1]: var-lib-containers-storage-overlay-992d9261a0b8836b52cac6f3437bdf71a4b0b2d505dd93db922cbad56f1ad136-merged.mount: Deactivated successfully.
Oct 09 09:36:03 compute-2 podman[11096]: 2025-10-09 09:36:03.901857185 +0000 UTC m=+1.068723452 container remove 01e6f505a362384abebf891d0c4ceb8e54c32caf3d52d95c73c7c439ae759852 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.038969996 +0000 UTC m=+0.028044928 container create c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15fcf942ea6486a67c1f6ed712fd348ca0c751e21c3bc2dc0986fbd2d47186d9/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.075452434 +0000 UTC m=+0.064527386 container init c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Oct 09 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.080236798 +0000 UTC m=+0.069311740 container start c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Oct 09 09:36:04 compute-2 bash[11331]: c6fd36dc28e613d9ba2027a53df395595913ffe35e6af0dafbfe59f7a969b000
Oct 09 09:36:04 compute-2 podman[11331]: 2025-10-09 09:36:04.028071163 +0000 UTC m=+0.017146115 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:04 compute-2 systemd[1]: Started Ceph osd.2 for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:04 compute-2 ceph-osd[11347]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:04 compute-2 ceph-osd[11347]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Oct 09 09:36:04 compute-2 ceph-osd[11347]: pidfile_write: ignore empty --pid-file
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 sudo[10833]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 sudo[11359]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:04 compute-2 sudo[11359]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:04 compute-2 sudo[11359]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:04 compute-2 sudo[11384]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- raw list --format json
Oct 09 09:36:04 compute-2 sudo[11384]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:04 compute-2 ceph-mon[5983]: from='client.24251 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:04 compute-2 ceph-mon[5983]: pgmap v13: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Oct 09 09:36:04 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:04 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.492698747 +0000 UTC m=+0.025939217 container create ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:36:04 compute-2 systemd[1]: Started libpod-conmon-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope.
Oct 09 09:36:04 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.545262593 +0000 UTC m=+0.078503074 container init ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.549619439 +0000 UTC m=+0.082859910 container start ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.550570543 +0000 UTC m=+0.083811014 container attach ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:04 compute-2 dazzling_goldberg[11458]: 167 167
Oct 09 09:36:04 compute-2 systemd[1]: libpod-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope: Deactivated successfully.
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.553531326 +0000 UTC m=+0.086771788 container died ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct 09 09:36:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-114603ded65df7b6a3274e4fb53b85326f2a748227019c21abc721975f9576fa-merged.mount: Deactivated successfully.
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.571668018 +0000 UTC m=+0.104908489 container remove ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_goldberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Oct 09 09:36:04 compute-2 podman[11445]: 2025-10-09 09:36:04.482085372 +0000 UTC m=+0.015325863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:04 compute-2 systemd[1]: libpod-conmon-ec6eb38ce32f3b5babf4bb87c2513f38a9e5bb7841ba275230fd89ae6a8a6005.scope: Deactivated successfully.
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.685114609 +0000 UTC m=+0.027497367 container create abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5800 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:04 compute-2 systemd[1]: Started libpod-conmon-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope.
Oct 09 09:36:04 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.734005037 +0000 UTC m=+0.076387805 container init abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.74202965 +0000 UTC m=+0.084412408 container start abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.744788063 +0000 UTC m=+0.087170820 container attach abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:04 compute-2 podman[11479]: 2025-10-09 09:36:04.673266085 +0000 UTC m=+0.015648863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:04 compute-2 ceph-osd[11347]: bdev(0x55bdd34f5c00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:05 compute-2 lvm[11576]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:36:05 compute-2 lvm[11576]: VG ceph_vg0 finished
Oct 09 09:36:05 compute-2 ceph-osd[11347]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct 09 09:36:05 compute-2 ceph-osd[11347]: load: jerasure load: lrc 
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:05 compute-2 mystifying_knuth[11500]: {}
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:05 compute-2 podman[11479]: 2025-10-09 09:36:05.253032971 +0000 UTC m=+0.595415730 container died abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:36:05 compute-2 systemd[1]: libpod-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope: Deactivated successfully.
Oct 09 09:36:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-146a7a94babfc722a617f00a5cb605a690ac56ea075cd7bef95ae0fe560d2495-merged.mount: Deactivated successfully.
Oct 09 09:36:05 compute-2 podman[11479]: 2025-10-09 09:36:05.276725521 +0000 UTC m=+0.619108278 container remove abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=mystifying_knuth, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct 09 09:36:05 compute-2 ceph-mon[5983]: from='client.24257 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:05 compute-2 systemd[1]: libpod-conmon-abb354380ac9818a55dd2be77f38c2bc78055bc6f23cf1604e99bb5511f45338.scope: Deactivated successfully.
Oct 09 09:36:05 compute-2 sudo[11384]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:05 compute-2 sudo[11592]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:05 compute-2 sudo[11592]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:05 compute-2 sudo[11592]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:05 compute-2 sudo[11617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:05 compute-2 sudo[11617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.69718869 +0000 UTC m=+0.025233838 container create 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:05 compute-2 systemd[1]: Started libpod-conmon-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope.
Oct 09 09:36:05 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.739687945 +0000 UTC m=+0.067733103 container init 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.744572687 +0000 UTC m=+0.072617836 container start 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.745893939 +0000 UTC m=+0.073939107 container attach 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct 09 09:36:05 compute-2 ecstatic_pascal[11694]: 167 167
Oct 09 09:36:05 compute-2 systemd[1]: libpod-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope: Deactivated successfully.
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.748294125 +0000 UTC m=+0.076339273 container died 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Oct 09 09:36:05 compute-2 systemd[1]: var-lib-containers-storage-overlay-52c2dddcdc46689bb0e0cd1f4c4b6de91cb29428d22f72cebd7a3332179f683e-merged.mount: Deactivated successfully.
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.766647286 +0000 UTC m=+0.094692434 container remove 275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 09 09:36:05 compute-2 podman[11681]: 2025-10-09 09:36:05.687258362 +0000 UTC m=+0.015303530 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:05 compute-2 ceph-osd[11347]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct 09 09:36:05 compute-2 ceph-osd[11347]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:05 compute-2 systemd[1]: libpod-conmon-275eef3e6972c9c7731099a7fe798a648b4a58b783b49377bf6bbad75e49ecac.scope: Deactivated successfully.
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:05 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:05 compute-2 systemd[1]: Reloading.
Oct 09 09:36:05 compute-2 systemd-rc-local-generator[11741]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:05 compute-2 systemd-sysv-generator[11744]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:06 compute-2 systemd[1]: Reloading.
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:06 compute-2 systemd-rc-local-generator[11789]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:06 compute-2 systemd-sysv-generator[11792]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:06 compute-2 systemd[1]: Starting Ceph rgw.rgw.compute-2.mbbcec for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436ac00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount shared_bdev_used = 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 09:36:06 compute-2 ceph-mon[5983]: pgmap v14: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 18 KiB/s rd, 0 B/s wr, 7 op/s
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='client.14469 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.mbbcec", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:06 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-2.mbbcec on compute-2
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2036627890' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Git sha 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB SUMMARY
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB Session ID:  70IKZGC0PAQBQGU5NYTU
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                     Options.env: 0x55bdd3549650
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                Options.info_log: 0x55bdd436f580
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.write_buffer_manager: 0x55bdd4462a00
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.row_cache: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.wal_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.wal_compression: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_jobs: 4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compression algorithms supported:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZSTD supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f940)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358b350
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd436f960)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b80159d-2ad0-4081-a2fe-760c1c44de54
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566349086, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566349264, "job": 1, "event": "recovery_finished"}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: freelist init
Oct 09 09:36:06 compute-2 ceph-osd[11347]: freelist _read_cfg
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs umount
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) close
Oct 09 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.381922504 +0000 UTC m=+0.027410182 container create df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Oct 09 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:06 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a6e81e854b124e463361df86bef452073265f2c6367ce77c1fdebdebb29e434f/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-2.mbbcec supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.420775644 +0000 UTC m=+0.066263341 container init df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.424697501 +0000 UTC m=+0.070185178 container start df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct 09 09:36:06 compute-2 bash[11980]: df47085309c18c23760309ab65395716ed79e6d3f6a375f0ac57262a3b82f849
Oct 09 09:36:06 compute-2 podman[11980]: 2025-10-09 09:36:06.370976783 +0000 UTC m=+0.016464480 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:06 compute-2 systemd[1]: Started Ceph rgw.rgw.compute-2.mbbcec for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:06 compute-2 sudo[11617]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:06 compute-2 radosgw[12043]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:06 compute-2 radosgw[12043]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Oct 09 09:36:06 compute-2 radosgw[12043]: framework: beast
Oct 09 09:36:06 compute-2 radosgw[12043]: framework conf key: endpoint, val: 192.168.122.102:8082
Oct 09 09:36:06 compute-2 radosgw[12043]: init_numa not setting numa affinity
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bdev(0x55bdd436b000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluefs mount shared_bdev_used = 4718592
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: RocksDB version: 7.9.2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Git sha 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compile date 2025-07-17 03:12:14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB SUMMARY
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB Session ID:  70IKZGC0PAQBQGU5NYTV
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: CURRENT file:  CURRENT
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: IDENTITY file:  IDENTITY
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.error_if_exists: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.create_if_missing: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.paranoid_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                     Options.env: 0x55bdd35493b0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                Options.info_log: 0x55bdd436fe80
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_file_opening_threads: 16
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.statistics: (nil)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.use_fsync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.max_log_file_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.keep_log_file_num: 1000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.recycle_log_file_num: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.allow_fallocate: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.allow_mmap_reads: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.allow_mmap_writes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_reads: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.create_missing_column_families: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.db_log_dir: 
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                                 Options.wal_dir: db.wal
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.table_cache_numshardbits: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.advise_random_on_open: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.db_write_buffer_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.write_buffer_manager: 0x55bdd4462aa0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                            Options.rate_limiter: (nil)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.wal_recovery_mode: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_thread_tracking: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.enable_pipelined_write: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.unordered_write: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.row_cache: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                              Options.wal_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_ingest_behind: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.two_write_queues: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.manual_wal_flush: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.wal_compression: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.atomic_flush: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.log_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.best_efforts_recovery: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.allow_data_in_errors: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.db_host_id: __hostname__
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.enforce_single_del_contracts: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_jobs: 4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_background_compactions: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_subcompactions: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delayed_write_rate : 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.max_open_files: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.max_background_flushes: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Compression algorithms supported:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZSTD supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kXpressCompression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kBZip2Compression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZSTDNotFinalCompression supported: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kLZ4Compression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kZlibCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kLZ4HCCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         kSnappyCompression supported: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Fast CRC32 supported: Supported on x86
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DMutex implementation: pthread_mutex_t
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca3a0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a9b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a590
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a590
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:           Options.merge_operator: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.compaction_filter_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.sst_partitioner_factory: None
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.table_factory: BlockBasedTable
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55bdd46ca020)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55bdd358a590
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.write_buffer_size: 16777216
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.max_write_buffer_number: 64
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.compression: LZ4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression: Disabled
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.num_levels: 7
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:            Options.compression_opts.window_bits: -14
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.level: 32767
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.compression_opts.strategy: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                  Options.compression_opts.enabled: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.target_file_size_base: 67108864
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:             Options.target_file_size_multiplier: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.arena_block_size: 1048576
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.disable_auto_compactions: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.inplace_update_support: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:   Options.memtable_huge_page_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.bloom_locality: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                    Options.max_successive_merges: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.paranoid_file_checks: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.force_consistency_checks: 1
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.report_bg_io_stats: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                               Options.ttl: 2592000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                       Options.enable_blob_files: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                           Options.min_blob_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                          Options.blob_file_size: 268435456
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb:                Options.blob_file_starting_level: 0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 7b80159d-2ad0-4081-a2fe-760c1c44de54
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566629184, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566630695, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566631615, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566632426, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002566, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7b80159d-2ad0-4081-a2fe-760c1c44de54", "db_session_id": "70IKZGC0PAQBQGU5NYTV", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002566632986, "job": 1, "event": "recovery_finished"}
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55bdd46cfc00
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: DB pointer 0x55bdd46ae000
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct 09 09:36:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:36:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.04 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 09:36:06 compute-2 ceph-osd[11347]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct 09 09:36:06 compute-2 ceph-osd[11347]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct 09 09:36:06 compute-2 ceph-osd[11347]: _get_class not permitted to load lua
Oct 09 09:36:06 compute-2 ceph-osd[11347]: _get_class not permitted to load sdk
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 load_pgs
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 load_pgs opened 0 pgs
Oct 09 09:36:06 compute-2 ceph-osd[11347]: osd.2 0 log_to_monitors true
Oct 09 09:36:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:06.645+0000 7ff04a68c740 -1 osd.2 0 log_to_monitors true
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.fxnvnn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:07 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-1.fxnvnn on compute-1
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct 09 09:36:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2719329378' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct 09 09:36:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e32 e32: 3 total, 2 up, 3 in
Oct 09 09:36:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Oct 09 09:36:07 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 09:36:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct 09 09:36:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 done with init, starting boot process
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 start_boot
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct 09 09:36:08 compute-2 ceph-osd[11347]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct 09 09:36:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e33 e33: 3 total, 2 up, 3 in
Oct 09 09:36:08 compute-2 ceph-mon[5983]: pgmap v15: 39 pgs: 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail; 16 KiB/s rd, 0 B/s wr, 6 op/s
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct 09 09:36:08 compute-2 ceph-mon[5983]: osdmap e32: 3 total, 2 up, 3 in
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/573248088' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.yciajn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:08 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:08 compute-2 radosgw[12043]: rgw main: failed to create zonegroup with (17) File exists
Oct 09 09:36:08 compute-2 sudo[12846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:08 compute-2 sudo[12846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:08 compute-2 sudo[12846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:08 compute-2 sudo[12871]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:08 compute-2 sudo[12871]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.227277314 +0000 UTC m=+0.037194362 container create 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 86.253 iops: 22080.769 elapsed_sec: 0.136
Oct 09 09:36:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [WRN] : OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 0 waiting for initial osdmap
Oct 09 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:09.231+0000 7ff04660f640 -1 osd.2 0 waiting for initial osdmap
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 check_osdmap_features require_osd_release unknown -> squid
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 set_numa_affinity not setting numa affinity
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 33 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Oct 09 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-osd-2[11343]: 2025-10-09T09:36:09.253+0000 7ff041c37640 -1 osd.2 33 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct 09 09:36:09 compute-2 systemd[1]: Started libpod-conmon-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope.
Oct 09 09:36:09 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.285305475 +0000 UTC m=+0.095222533 container init 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.289721693 +0000 UTC m=+0.099638742 container start 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.291865576 +0000 UTC m=+0.101782644 container attach 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Oct 09 09:36:09 compute-2 sleepy_brahmagupta[12943]: 167 167
Oct 09 09:36:09 compute-2 systemd[1]: libpod-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope: Deactivated successfully.
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.293828378 +0000 UTC m=+0.103745425 container died 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:36:09 compute-2 systemd[1]: var-lib-containers-storage-overlay-388b38f905a4b18fb893559c0ecacd1631e6a986cccebe9542fadc1b7fb41c17-merged.mount: Deactivated successfully.
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.209617772 +0000 UTC m=+0.019534840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:09 compute-2 podman[12930]: 2025-10-09 09:36:09.321179307 +0000 UTC m=+0.131096355 container remove 5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_brahmagupta, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 09 09:36:09 compute-2 systemd[1]: libpod-conmon-5e5ee788b024a047878ac66135e7c7831d58c73ed31a9bdaae6a201272b1ccbb.scope: Deactivated successfully.
Oct 09 09:36:09 compute-2 systemd[1]: Reloading.
Oct 09 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:09 compute-2 systemd-rc-local-generator[12978]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:09 compute-2 systemd-sysv-generator[12981]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:09 compute-2 ceph-mon[5983]: Deploying daemon rgw.rgw.compute-0.yciajn on compute-0
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct 09 09:36:09 compute-2 ceph-mon[5983]: osdmap e33: 3 total, 2 up, 3 in
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3729780142' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.zfggbi", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:09 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e34 e34: 3 total, 3 up, 3 in
Oct 09 09:36:09 compute-2 ceph-osd[11347]: osd.2 34 state: booting -> active
Oct 09 09:36:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Oct 09 09:36:09 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:09 compute-2 systemd[1]: Reloading.
Oct 09 09:36:09 compute-2 systemd-rc-local-generator[13017]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:09 compute-2 systemd-sysv-generator[13020]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:09 compute-2 systemd[1]: Starting Ceph mds.cephfs.compute-2.zfggbi for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.907911399 +0000 UTC m=+0.026801595 container create 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:09 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a15a96f37dcae3a45c8bf1feeee5397aa375ad70ca45915cd0816321587e904/merged/var/lib/ceph/mds/ceph-cephfs.compute-2.zfggbi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.955029264 +0000 UTC m=+0.073919470 container init 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.959087166 +0000 UTC m=+0.077977362 container start 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 09:36:09 compute-2 bash[13073]: 4ea5b01b8fc8974beabfdcda1518ee1405a7f5f6721211912230be092ba90d34
Oct 09 09:36:09 compute-2 podman[13073]: 2025-10-09 09:36:09.896821825 +0000 UTC m=+0.015712042 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:09 compute-2 systemd[1]: Started Ceph mds.cephfs.compute-2.zfggbi for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:09 compute-2 ceph-mds[13089]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:09 compute-2 ceph-mds[13089]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Oct 09 09:36:09 compute-2 ceph-mds[13089]: main not setting numa affinity
Oct 09 09:36:09 compute-2 ceph-mds[13089]: pidfile_write: ignore empty --pid-file
Oct 09 09:36:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi[13085]: starting mds.cephfs.compute-2.zfggbi at 
Oct 09 09:36:09 compute-2 sudo[12871]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:09 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 2 from mon.0
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.1c( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.1d( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.5( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.f( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.b( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.12( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 34 pg[2.18( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-mon[5983]: purged_snaps scrub starts
Oct 09 09:36:10 compute-2 ceph-mon[5983]: purged_snaps scrub ok
Oct 09 09:36:10 compute-2 ceph-mon[5983]: pgmap v18: 40 pgs: 1 unknown, 39 active+clean; 449 KiB data, 54 MiB used, 40 GiB / 40 GiB avail
Oct 09 09:36:10 compute-2 ceph-mon[5983]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Oct 09 09:36:10 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-2.zfggbi on compute-2
Oct 09 09:36:10 compute-2 ceph-mon[5983]: OSD bench result of 22080.768566 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: osd.2 [v2:192.168.122.102:6800/4056276867,v1:192.168.122.102:6801/4056276867] boot
Oct 09 09:36:10 compute-2 ceph-mon[5983]: osdmap e34: 3 total, 3 up, 3 in
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2574318436' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.wjwyle", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:10 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e35 e35: 3 total, 3 up, 3 in
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 3 from mon.0
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.12( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1c( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.b( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.f( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1d( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=0/0 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34) [2] r=0 lpr=35 pi=[11,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e3 new map
Oct 09 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e3 print_map
                                          e3
                                          btime 2025-10-09T09:36:10:513915+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        2
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:35:51.790428+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        
                                          up        {}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-2.zfggbi{-1:14535} state up:standby seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34) [2] r=0 lpr=35 pi=[13,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=0/0 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.18( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.5( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=16/16 les/c/f=17/17/0 sis=34) [2] r=0 lpr=34 pi=[16,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Monitors have assigned me to become a standby
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.10( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=34) [2] r=0 lpr=35 pi=[13,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=11/11 les/c/f=12/12/0 sis=34) [2] r=0 lpr=35 pi=[11,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.1b( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.d( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.a( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.c( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.13( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 35 pg[2.15( empty local-lis/les=34/35 n=0 ec=16/10 lis/c=22/22 les/c/f=23/23/0 sis=34) [2] r=0 lpr=35 pi=[22,34)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 4 from mon.0
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map I am now mds.0.4
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x1
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x100
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x600
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x601
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x602
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x603
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x604
Oct 09 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e4 new map
Oct 09 09:36:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e4 print_map
                                          e4
                                          btime 2025-10-09T09:36:10:526987+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        4
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:10.526981+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 0 members: 
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:creating seq 1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x605
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x606
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x607
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x608
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.cache creating system inode with ino:0x609
Oct 09 09:36:10 compute-2 ceph-mds[13089]: mds.0.4 creating_done
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e36 e36: 3 total, 3 up, 3 in
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Oct 09 09:36:11 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:11 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-0.wjwyle on compute-0
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct 09 09:36:11 compute-2 ceph-mon[5983]: osdmap e35: 3 total, 3 up, 3 in
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:boot
Oct 09 09:36:11 compute-2 ceph-mon[5983]: daemon mds.cephfs.compute-2.zfggbi assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct 09 09:36:11 compute-2 ceph-mon[5983]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct 09 09:36:11 compute-2 ceph-mon[5983]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct 09 09:36:11 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct 09 09:36:11 compute-2 ceph-mon[5983]: fsmap cephfs:0 1 up:standby
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct 09 09:36:11 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:creating}
Oct 09 09:36:11 compute-2 ceph-mon[5983]: daemon mds.cephfs.compute-2.zfggbi is now active in filesystem cephfs as rank 0
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.svghvn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct 09 09:36:11 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:11 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 5 from mon.0
Oct 09 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map I am now mds.0.4
Oct 09 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct 09 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 recovery_done -- successful recovery!
Oct 09 09:36:11 compute-2 ceph-mds[13089]: mds.0.4 active_start
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e5 new map
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e5 print_map
                                          e5
                                          btime 2025-10-09T09:36:11:555720+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        0
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e6 new map
Oct 09 09:36:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e6 print_map
                                          e6
                                          btime 2025-10-09T09:36:11:561187+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e37 e37: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-2 ceph-mon[5983]: pgmap v21: 41 pgs: 7 peering, 2 unknown, 32 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:12 compute-2 ceph-mon[5983]: Deploying daemon mds.cephfs.compute-1.svghvn on compute-1
Oct 09 09:36:12 compute-2 ceph-mon[5983]: osdmap e36: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct 09 09:36:12 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:boot
Oct 09 09:36:12 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct 09 09:36:12 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 1 up:standby
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct 09 09:36:12 compute-2 ceph-mon[5983]: osdmap e37: 3 total, 3 up, 3 in
Oct 09 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e7 new map
Oct 09 09:36:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e7 print_map
                                          e7
                                          btime 2025-10-09T09:36:12.564873+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        5
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:11.555718+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 2 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e38 e38: 3 total, 3 up, 3 in
Oct 09 09:36:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Oct 09 09:36:13 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:13 compute-2 ceph-mon[5983]: Deploying daemon alertmanager.compute-0 on compute-0
Oct 09 09:36:13 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:boot
Oct 09 09:36:13 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:13 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct 09 09:36:13 compute-2 ceph-mon[5983]: osdmap e38: 3 total, 3 up, 3 in
Oct 09 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e39 e39: 3 total, 3 up, 3 in
Oct 09 09:36:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Oct 09 09:36:14 compute-2 ceph-mon[5983]: log_channel(audit) log [INF] : from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: pgmap v24: 42 pgs: 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct 09 09:36:14 compute-2 ceph-mon[5983]: osdmap e39: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 e40: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-2 ceph-mds[13089]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 09 09:36:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mds-cephfs-compute-2-zfggbi[13085]: 2025-10-09T09:36:15.537+0000 7ff40218f640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1928624186' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2454302699' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3877219415' entity='client.rgw.rgw.compute-0.yciajn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-2.mbbcec' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-2 ceph-mon[5983]: from='client.? ' entity='client.rgw.rgw.compute-1.fxnvnn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct 09 09:36:15 compute-2 ceph-mon[5983]: osdmap e40: 3 total, 3 up, 3 in
Oct 09 09:36:15 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi Updating MDS map to version 8 from mon.0
Oct 09 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e8 new map
Oct 09 09:36:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e8 print_map
                                          e8
                                          btime 2025-10-09T09:36:15.540254+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:14.585925+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:15 compute-2 radosgw[12043]: v1 topic migration: starting v1 topic migration..
Oct 09 09:36:15 compute-2 radosgw[12043]: LDAP not started since no server URIs were provided in the configuration.
Oct 09 09:36:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-rgw-rgw-compute-2-mbbcec[12039]: 2025-10-09T09:36:15.598+0000 7feaf1e2d980 -1 LDAP not started since no server URIs were provided in the configuration.
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: v1 topic migration: finished v1 topic migration
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: framework: beast
Oct 09 09:36:15 compute-2 radosgw[12043]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct 09 09:36:15 compute-2 radosgw[12043]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: starting handler: beast
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 09 09:36:15 compute-2 radosgw[12043]: set uid:gid to 167:167 (ceph:ceph)
Oct 09 09:36:15 compute-2 radosgw[12043]: mgrc service_daemon_register rgw.24283 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-2,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC 7763 64-Core Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.102:8082,frontend_type#0=beast,hostname=compute-2,id=rgw.compute-2.mbbcec,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7865152,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=773beadf-adcd-43ff-a482-a2d7a5b40bd8,zone_name=default,zonegroup_id=74fea7f9-d931-4447-a756-db2299521313,zonegroup_name=default}
Oct 09 09:36:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct 09 09:36:16 compute-2 ceph-mon[5983]: pgmap v27: 43 pgs: 1 unknown, 1 creating+peering, 7 peering, 34 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 5.2 KiB/s wr, 21 op/s
Oct 09 09:36:16 compute-2 ceph-mon[5983]: Regenerating cephadm self-signed grafana TLS certificates
Oct 09 09:36:16 compute-2 ceph-mon[5983]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Oct 09 09:36:16 compute-2 ceph-mon[5983]: Deploying daemon grafana.compute-0 on compute-0
Oct 09 09:36:16 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] up:active
Oct 09 09:36:16 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] up:standby
Oct 09 09:36:16 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:16 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e9 new map
Oct 09 09:36:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).mds e9 print_map
                                          e9
                                          btime 2025-10-09T09:36:16.832969+0000
                                          enable_multiple, ever_enabled_multiple: 1,1
                                          default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          legacy client fscid: 1
                                           
                                          Filesystem 'cephfs' (1)
                                          fs_name        cephfs
                                          epoch        8
                                          flags        12 joinable allow_snaps allow_multimds_snaps
                                          created        2025-10-09T09:35:51.790428+0000
                                          modified        2025-10-09T09:36:14.585925+0000
                                          tableserver        0
                                          root        0
                                          session_timeout        60
                                          session_autoclose        300
                                          max_file_size        1099511627776
                                          max_xattr_size        65536
                                          required_client_features        {}
                                          last_failure        0
                                          last_failure_osd_epoch        0
                                          compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                          max_mds        1
                                          in        0
                                          up        {0=14535}
                                          failed        
                                          damaged        
                                          stopped        
                                          data_pools        [7]
                                          metadata_pool        6
                                          inline_data        disabled
                                          balancer        
                                          bal_rank_mask        -1
                                          standby_count_wanted        1
                                          qdb_cluster        leader: 14535 members: 14535
                                          [mds.cephfs.compute-2.zfggbi{0:14535} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1047568798,v1:192.168.122.102:6805/1047568798] compat {c=[1],r=[1],i=[1fff]}]
                                           
                                           
                                          Standby daemons:
                                           
                                          [mds.cephfs.compute-0.wjwyle{-1:14541} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2471701871,v1:192.168.122.100:6807/2471701871] compat {c=[1],r=[1],i=[1fff]}]
                                          [mds.cephfs.compute-1.svghvn{-1:24317} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] compat {c=[1],r=[1],i=[1fff]}]
Oct 09 09:36:17 compute-2 ceph-mon[5983]: pgmap v29: 43 pgs: 1 unknown, 1 creating+peering, 41 active+clean; 452 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:17 compute-2 ceph-mon[5983]: mds.? [v2:192.168.122.101:6804/3081136732,v1:192.168.122.101:6805/3081136732] up:standby
Oct 09 09:36:17 compute-2 ceph-mon[5983]: fsmap cephfs:1 {0=cephfs.compute-2.zfggbi=up:active} 2 up:standby
Oct 09 09:36:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:19 compute-2 ceph-mon[5983]: pgmap v30: 43 pgs: 43 active+clean; 456 KiB data, 485 MiB used, 60 GiB / 60 GiB avail; 230 KiB/s rd, 5.7 KiB/s wr, 422 op/s
Oct 09 09:36:22 compute-2 ceph-mon[5983]: pgmap v31: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 191 KiB/s rd, 4.7 KiB/s wr, 350 op/s
Oct 09 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:22 compute-2 ceph-mon[5983]: Deploying daemon haproxy.rgw.default.compute-0.kmcywb on compute-0
Oct 09 09:36:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:24 compute-2 ceph-mon[5983]: pgmap v32: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 168 KiB/s rd, 4.1 KiB/s wr, 307 op/s
Oct 09 09:36:26 compute-2 ceph-mon[5983]: pgmap v33: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 138 KiB/s rd, 3.4 KiB/s wr, 253 op/s
Oct 09 09:36:26 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:26 compute-2 sudo[13157]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:26 compute-2 sudo[13157]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:26 compute-2 sudo[13157]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:26 compute-2 sudo[13182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:26 compute-2 sudo[13182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:27 compute-2 ceph-mon[5983]: Deploying daemon haproxy.rgw.default.compute-2.gkeojf on compute-2
Oct 09 09:36:27 compute-2 ceph-mon[5983]: pgmap v34: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 123 KiB/s rd, 3.0 KiB/s wr, 225 op/s
Oct 09 09:36:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:36:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:27.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.031044303 +0000 UTC m=+2.073946944 container create 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 systemd[1]: Started libpod-conmon-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope.
Oct 09 09:36:29 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.019898213 +0000 UTC m=+2.062800874 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.083021062 +0000 UTC m=+2.125923703 container init 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.087747856 +0000 UTC m=+2.130650497 container start 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.088797345 +0000 UTC m=+2.131699985 container attach 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 systemd[1]: libpod-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope: Deactivated successfully.
Oct 09 09:36:29 compute-2 conmon[13336]: conmon 0e904bbca9eaa00819fa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope/container/memory.events
Oct 09 09:36:29 compute-2 epic_franklin[13336]: 0 0
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.09124959 +0000 UTC m=+2.134152231 container died 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 systemd[1]: var-lib-containers-storage-overlay-b47882e98ba870f162a06c619caeaf9062ee85ff21168da7ed7721e9b4dad5f1-merged.mount: Deactivated successfully.
Oct 09 09:36:29 compute-2 podman[13239]: 2025-10-09 09:36:29.10926881 +0000 UTC m=+2.152171451 container remove 0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484 (image=quay.io/ceph/haproxy:2.3, name=epic_franklin)
Oct 09 09:36:29 compute-2 systemd[1]: libpod-conmon-0e904bbca9eaa00819fa4e83ed7938fe54ac06edaa0e9b05c90ab1d765138484.scope: Deactivated successfully.
Oct 09 09:36:29 compute-2 systemd[1]: Reloading.
Oct 09 09:36:29 compute-2 systemd-rc-local-generator[13374]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:29 compute-2 systemd-sysv-generator[13377]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:29 compute-2 systemd[1]: Reloading.
Oct 09 09:36:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:29 compute-2 systemd-sysv-generator[13423]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:29 compute-2 systemd-rc-local-generator[13417]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:29 compute-2 systemd[1]: Starting Ceph haproxy.rgw.default.compute-2.gkeojf for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.696329843 +0000 UTC m=+0.026917372 container create 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:29 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/da51568ca7bb05713a3e973fcd8e649070918a706c3c684eeb0b713f43496906/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.732226778 +0000 UTC m=+0.062814318 container init 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.736175144 +0000 UTC m=+0.066762674 container start 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:29 compute-2 bash[13470]: 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c
Oct 09 09:36:29 compute-2 podman[13470]: 2025-10-09 09:36:29.685371076 +0000 UTC m=+0.015958626 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:36:29 compute-2 systemd[1]: Started Ceph haproxy.rgw.default.compute-2.gkeojf for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf[13482]: [NOTICE] 281/093629 (2) : New worker #1 (4) forked
Oct 09 09:36:29 compute-2 sudo[13182]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:29 compute-2 ceph-mon[5983]: pgmap v35: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 115 KiB/s rd, 2.8 KiB/s wr, 211 op/s
Oct 09 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:29 compute-2 sudo[13492]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:29 compute-2 sudo[13492]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:29 compute-2 sudo[13492]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:29 compute-2 sudo[13517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:29 compute-2 sudo[13517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:29.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:30 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:36:30 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:36:30 compute-2 ceph-mon[5983]: Deploying daemon keepalived.rgw.default.compute-2.tcjodw on compute-2
Oct 09 09:36:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:31.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:31 compute-2 ceph-mon[5983]: pgmap v36: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:36:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:31.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:36:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:33.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.346337884 +0000 UTC m=+3.211536467 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.355420378 +0000 UTC m=+3.220618942 container create 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vcs-type=git, version=2.2.4, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 09:36:33 compute-2 systemd[1]: Started libpod-conmon-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope.
Oct 09 09:36:33 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.394167345 +0000 UTC m=+3.259365929 container init 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2023-02-22T09:23:20, name=keepalived, io.buildah.version=1.28.2, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=)
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.398734793 +0000 UTC m=+3.263933356 container start 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793)
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.399659342 +0000 UTC m=+3.264857895 container attach 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.tags=Ceph keepalived, release=1793, io.buildah.version=1.28.2)
Oct 09 09:36:33 compute-2 trusting_cerf[13656]: 0 0
Oct 09 09:36:33 compute-2 systemd[1]: libpod-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope: Deactivated successfully.
Oct 09 09:36:33 compute-2 conmon[13656]: conmon 61cf5003552e9090f4a8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope/container/memory.events
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.403451903 +0000 UTC m=+3.268650466 container died 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, description=keepalived for Ceph, com.redhat.component=keepalived-container, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Oct 09 09:36:33 compute-2 systemd[1]: var-lib-containers-storage-overlay-9404fe28d29a4813fcec7025c71e249f7a0f9fadff88ee9928052bf3728563b7-merged.mount: Deactivated successfully.
Oct 09 09:36:33 compute-2 podman[13575]: 2025-10-09 09:36:33.430379872 +0000 UTC m=+3.295578435 container remove 61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061 (image=quay.io/ceph/keepalived:2.2.4, name=trusting_cerf, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4)
Oct 09 09:36:33 compute-2 systemd[1]: libpod-conmon-61cf5003552e9090f4a83b78bf22e92e68c70821b377cddb7dea764c67c13061.scope: Deactivated successfully.
Oct 09 09:36:33 compute-2 systemd[1]: Reloading.
Oct 09 09:36:33 compute-2 systemd-rc-local-generator[13696]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:33 compute-2 systemd-sysv-generator[13699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:33 compute-2 systemd[1]: Reloading.
Oct 09 09:36:33 compute-2 systemd-rc-local-generator[13733]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:33 compute-2 systemd-sysv-generator[13742]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:33 compute-2 ceph-mon[5983]: pgmap v37: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:33.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:33 compute-2 systemd[1]: Starting Ceph keepalived.rgw.default.compute-2.tcjodw for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.053476042 +0000 UTC m=+0.027770684 container create a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, vendor=Red Hat, Inc., version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git)
Oct 09 09:36:34 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709a7a8332ebb2081a749167efe33fdf7251040b3cf49bf74e854e3ff5ef17ef/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.092480777 +0000 UTC m=+0.066775419 container init a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, distribution-scope=public, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, build-date=2023-02-22T09:23:20, vendor=Red Hat, Inc.)
Oct 09 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.097493216 +0000 UTC m=+0.071787857 container start a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, version=2.2.4, distribution-scope=public, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9)
Oct 09 09:36:34 compute-2 bash[13791]: a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b
Oct 09 09:36:34 compute-2 podman[13791]: 2025-10-09 09:36:34.042148262 +0000 UTC m=+0.016442914 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:36:34 compute-2 systemd[1]: Started Ceph keepalived.rgw.default.compute-2.tcjodw for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Starting VRRP child process, pid=4
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: Startup complete
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: (VI_0) Entering BACKUP STATE (init)
Oct 09 09:36:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:34 2025: VRRP_Script(check_backend) succeeded
Oct 09 09:36:34 compute-2 sudo[13517]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:35.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:35 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:36:35 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:36:35 compute-2 ceph-mon[5983]: Deploying daemon keepalived.rgw.default.compute-0.uozjha on compute-0
Oct 09 09:36:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:35.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:36 compute-2 ceph-mon[5983]: pgmap v38: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:37.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:37 2025: (VI_0) Entering MASTER STATE
Oct 09 09:36:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:37.899 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:38 compute-2 ceph-mon[5983]: pgmap v39: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:38 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:39.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:39 compute-2 ceph-mon[5983]: Deploying daemon prometheus.compute-0 on compute-0
Oct 09 09:36:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:39.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:40 compute-2 ceph-mon[5983]: pgmap v40: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:41.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:41 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct 09 09:36:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:36:41 2025: (VI_0) Entering BACKUP STATE
Oct 09 09:36:41 compute-2 ceph-mon[5983]: pgmap v41: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:41 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:41.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:36:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:43.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:36:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:43.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:43 compute-2 ceph-mon[5983]: pgmap v42: 43 pgs: 43 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:43 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  1: '-n'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  2: 'mgr.compute-2.takdnm'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  3: '-f'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  4: '--setuser'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  5: 'ceph'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  6: '--setgroup'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  7: 'ceph'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  8: '--default-log-to-file=false'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  9: '--default-log-to-journald=true'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn  10: '--default-log-to-stderr=false'
Oct 09 09:36:43 compute-2 ceph-mgr[6264]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Oct 09 09:36:44 compute-2 sshd-session[9033]: Connection closed by 192.168.122.100 port 52662
Oct 09 09:36:44 compute-2 sshd-session[9014]: pam_unix(sshd:session): session closed for user ceph-admin
Oct 09 09:36:44 compute-2 systemd[1]: session-18.scope: Deactivated successfully.
Oct 09 09:36:44 compute-2 systemd[1]: session-18.scope: Consumed 16.429s CPU time.
Oct 09 09:36:44 compute-2 systemd-logind[800]: Session 18 logged out. Waiting for processes to exit.
Oct 09 09:36:44 compute-2 systemd-logind[800]: Removed session 18.
Oct 09 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setuser ceph since I am not root
Oct 09 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: ignoring --setgroup ceph since I am not root
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: pidfile_write: ignore empty --pid-file
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'alerts'
Oct 09 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.174+0000 7fa874c23140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'balancer'
Oct 09 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.246+0000 7fa874c23140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'cephadm'
Oct 09 09:36:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'crash'
Oct 09 09:36:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:44.926+0000 7fa874c23140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct 09 09:36:44 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'dashboard'
Oct 09 09:36:44 compute-2 ceph-mon[5983]: from='mgr.14385 192.168.122.100:0/2520160453' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Oct 09 09:36:44 compute-2 ceph-mon[5983]: mgrmap e26: compute-0.lwqgfy(active, since 53s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:45.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'devicehealth'
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.469+0000 7fa874c23140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'diskprediction_local'
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]:   from numpy import show_config as show_numpy_config
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.609+0000 7fa874c23140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'influx'
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.671+0000 7fa874c23140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'insights'
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'iostat'
Oct 09 09:36:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:45.794+0000 7fa874c23140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct 09 09:36:45 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'k8sevents'
Oct 09 09:36:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:36:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:45.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'localpool'
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mds_autoscaler'
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'mirroring'
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'nfs'
Oct 09 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.657+0000 7fa874c23140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'orchestrator'
Oct 09 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.846+0000 7fa874c23140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_perf_query'
Oct 09 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.913+0000 7fa874c23140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'osd_support'
Oct 09 09:36:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:46.971+0000 7fa874c23140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct 09 09:36:46 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'pg_autoscaler'
Oct 09 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.040+0000 7fa874c23140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'progress'
Oct 09 09:36:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:47.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.102+0000 7fa874c23140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'prometheus'
Oct 09 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.401+0000 7fa874c23140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rbd_support'
Oct 09 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.489+0000 7fa874c23140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'restful'
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rgw'
Oct 09 09:36:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:47.870+0000 7fa874c23140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct 09 09:36:47 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'rook'
Oct 09 09:36:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:47.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.353+0000 7fa874c23140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'selftest'
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.415+0000 7fa874c23140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'snap_schedule'
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.488+0000 7fa874c23140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'stats'
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'status'
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.616+0000 7fa874c23140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telegraf'
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.678+0000 7fa874c23140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'telemetry'
Oct 09 09:36:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:48.811+0000 7fa874c23140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct 09 09:36:48 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'test_orchestrator'
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.001+0000 7fa874c23140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'volumes'
Oct 09 09:36:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.231+0000 7fa874c23140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Loading python module 'zabbix'
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 2025-10-09T09:36:49.291+0000 7fa874c23140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: dashboard
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: mgr load Constructed class from module: prometheus
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] server_addr: :: server_port: 9283
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] Starting engine...
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTING
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: CherryPy Checker:
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: The Application mounted at '' has an empty config.
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: 
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] server: ssl=no host=192.168.122.102 port=8443
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Configured CherryPy, starting engine...
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: ms_deliver_dispatch: unhandled message 0x563c932d3860 mon_map magic: 0 from mon.1 v2:192.168.122.102:3300/0
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Starting engine...
Oct 09 09:36:49 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp restarted
Oct 09 09:36:49 compute-2 ceph-mon[5983]: Standby manager daemon compute-1.etokpp started
Oct 09 09:36:49 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm restarted
Oct 09 09:36:49 compute-2 ceph-mon[5983]: Standby manager daemon compute-2.takdnm started
Oct 09 09:36:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [dashboard INFO root] Engine started...
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Serving on http://:::9283
Oct 09 09:36:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mgr-compute-2-takdnm[6260]: [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO cherrypy.error] [09/Oct/2025:09:36:49] ENGINE Bus STARTED
Oct 09 09:36:49 compute-2 ceph-mgr[6264]: [prometheus INFO root] Engine started.
Oct 09 09:36:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e41 e41: 3 total, 3 up, 3 in
Oct 09 09:36:49 compute-2 sshd-session[13873]: Accepted publickey for ceph-admin from 192.168.122.100 port 60998 ssh2: RSA SHA256:KIQR8fE5bpF/q/C5X5yWVfBmqI+cCPjsM63DCIxnpzU
Oct 09 09:36:49 compute-2 systemd-logind[800]: New session 20 of user ceph-admin.
Oct 09 09:36:49 compute-2 systemd[1]: Started Session 20 of User ceph-admin.
Oct 09 09:36:49 compute-2 sshd-session[13873]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Oct 09 09:36:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:49.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:49 compute-2 sudo[13877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:49 compute-2 sudo[13877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:49 compute-2 sudo[13877]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:50 compute-2 sudo[13902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:36:50 compute-2 sudo[13902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:50 compute-2 ceph-mon[5983]: mgrmap e27: compute-0.lwqgfy(active, since 58s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:50 compute-2 ceph-mon[5983]: Active manager daemon compute-0.lwqgfy restarted
Oct 09 09:36:50 compute-2 ceph-mon[5983]: Activating manager daemon compute-0.lwqgfy
Oct 09 09:36:50 compute-2 ceph-mon[5983]: osdmap e41: 3 total, 3 up, 3 in
Oct 09 09:36:50 compute-2 ceph-mon[5983]: mgrmap e28: compute-0.lwqgfy(active, starting, since 0.0142687s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.zfggbi"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.wjwyle"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.svghvn"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-0.lwqgfy", "id": "compute-0.lwqgfy"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-2.takdnm", "id": "compute-2.takdnm"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr metadata", "who": "compute-1.etokpp", "id": "compute-1.etokpp"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: Manager daemon compute-0.lwqgfy is now available
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/mirror_snapshot_schedule"}]: dispatch
Oct 09 09:36:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.lwqgfy/trash_purge_schedule"}]: dispatch
Oct 09 09:36:50 compute-2 podman[13986]: 2025-10-09 09:36:50.38594812 +0000 UTC m=+0.039719176 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:50 compute-2 podman[13986]: 2025-10-09 09:36:50.465188537 +0000 UTC m=+0.118959594 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Oct 09 09:36:50 compute-2 podman[14065]: 2025-10-09 09:36:50.714195842 +0000 UTC m=+0.037197558 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:36:50 compute-2 podman[14065]: 2025-10-09 09:36:50.721044312 +0000 UTC m=+0.044046018 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:36:51 compute-2 podman[14166]: 2025-10-09 09:36:51.010899868 +0000 UTC m=+0.036661975 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:51 compute-2 podman[14184]: 2025-10-09 09:36:51.068940989 +0000 UTC m=+0.045560352 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:51 compute-2 podman[14166]: 2025-10-09 09:36:51.071666424 +0000 UTC m=+0.097428511 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:36:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:51.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:51 compute-2 podman[14217]: 2025-10-09 09:36:51.204529027 +0000 UTC m=+0.034379410 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, vcs-type=git, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=keepalived-container, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 09 09:36:51 compute-2 podman[14217]: 2025-10-09 09:36:51.214041555 +0000 UTC m=+0.043891918 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, distribution-scope=public, description=keepalived for Ceph, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1793, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived)
Oct 09 09:36:51 compute-2 sudo[13902]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-2 sudo[14242]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:51 compute-2 sudo[14242]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-2 sudo[14242]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-2 sudo[14267]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:36:51 compute-2 sudo[14267]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-2 ceph-mon[5983]: mgrmap e29: compute-0.lwqgfy(active, since 1.02937s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:50] ENGINE Bus STARTING
Oct 09 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:50] ENGINE Serving on http://192.168.122.100:8765
Oct 09 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Serving on https://192.168.122.100:7150
Oct 09 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Bus STARTED
Oct 09 09:36:51 compute-2 ceph-mon[5983]: [09/Oct/2025:09:36:51] ENGINE Client ('192.168.122.100', 39912) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct 09 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:51 compute-2 sudo[14267]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-2 sudo[14321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:51 compute-2 sudo[14321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-2 sudo[14321]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:51 compute-2 sudo[14346]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 09:36:51 compute-2 sudo[14346]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:51.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:52 compute-2 sudo[14346]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:52 compute-2 sudo[14387]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:52 compute-2 sudo[14387]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:52 compute-2 sudo[14387]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:52 compute-2 sudo[14412]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 09:36:52 compute-2 sudo[14412]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.357367571 +0000 UTC m=+0.027395149 container create 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True)
Oct 09 09:36:52 compute-2 systemd[1]: Started libpod-conmon-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope.
Oct 09 09:36:52 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.418263043 +0000 UTC m=+0.088290621 container init 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.422611906 +0000 UTC m=+0.092639484 container start 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.423868726 +0000 UTC m=+0.093896304 container attach 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:52 compute-2 gracious_franklin[14481]: 167 167
Oct 09 09:36:52 compute-2 systemd[1]: libpod-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope: Deactivated successfully.
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.425301588 +0000 UTC m=+0.095329166 container died 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 09 09:36:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-77b14ccaac014a0e6e816b29b44ce39f61e6f15deb1440d8a0442f348e379a4b-merged.mount: Deactivated successfully.
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.441386385 +0000 UTC m=+0.111413963 container remove 13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct 09 09:36:52 compute-2 podman[14468]: 2025-10-09 09:36:52.344605641 +0000 UTC m=+0.014633240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:52 compute-2 systemd[1]: libpod-conmon-13c968a6fcd40f60274ec78a992959d810df2ee5b3dfa5878665fdf13d35ecc5.scope: Deactivated successfully.
Oct 09 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.551313763 +0000 UTC m=+0.025988046 container create afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 09:36:52 compute-2 systemd[1]: Started libpod-conmon-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope.
Oct 09 09:36:52 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:52 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.603137455 +0000 UTC m=+0.077811758 container init afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.610489401 +0000 UTC m=+0.085163684 container start afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.611468747 +0000 UTC m=+0.086143031 container attach afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 09 09:36:52 compute-2 podman[14503]: 2025-10-09 09:36:52.541299947 +0000 UTC m=+0.015974249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:52 compute-2 ceph-mon[5983]: pgmap v4: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:52 compute-2 ceph-mon[5983]: mgrmap e30: compute-0.lwqgfy(active, since 2s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:53 compute-2 hardcore_jang[14516]: [
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:     {
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "available": false,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "being_replaced": false,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "ceph_device_lvm": false,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "lsm_data": {},
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "lvs": [],
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "path": "/dev/sr0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "rejected_reasons": [
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "Has a FileSystem",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "Insufficient space (<5GB)"
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         ],
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         "sys_api": {
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "actuators": null,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "device_nodes": [
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:                 "sr0"
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             ],
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "devname": "sr0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "human_readable_size": "474.00 KB",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "id_bus": "ata",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "model": "QEMU DVD-ROM",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "nr_requests": "64",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "parent": "/dev/sr0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "partitions": {},
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "path": "/dev/sr0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "removable": "1",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "rev": "2.5+",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "ro": "0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "rotational": "0",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "sas_address": "",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "sas_device_handle": "",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "scheduler_mode": "mq-deadline",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "sectors": 0,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "sectorsize": "2048",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "size": 485376.0,
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "support_discard": "2048",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "type": "disk",
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:             "vendor": "QEMU"
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:         }
Oct 09 09:36:53 compute-2 hardcore_jang[14516]:     }
Oct 09 09:36:53 compute-2 hardcore_jang[14516]: ]
Oct 09 09:36:53 compute-2 systemd[1]: libpod-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope: Deactivated successfully.
Oct 09 09:36:53 compute-2 podman[15522]: 2025-10-09 09:36:53.086687632 +0000 UTC m=+0.017785335 container died afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:53 compute-2 systemd[1]: var-lib-containers-storage-overlay-22e5a05d2f12f2fb9de5837eeb61d6fdfb15e5c0f9e1d56be0fe8e53b483648d-merged.mount: Deactivated successfully.
Oct 09 09:36:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:53 compute-2 podman[15522]: 2025-10-09 09:36:53.106379772 +0000 UTC m=+0.037477465 container remove afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:36:53 compute-2 systemd[1]: libpod-conmon-afc23588e6fcb7e20fe25a47b11515111ba7c9812963e065e038c0762c7e0186.scope: Deactivated successfully.
Oct 09 09:36:53 compute-2 sudo[14412]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:36:53 compute-2 sudo[15533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:36:53 compute-2 sudo[15558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15558]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15583]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15583]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15583]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15608]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:53 compute-2 sudo[15608]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15608]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15633]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15633]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15633]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15681]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15681]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15681]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15706]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15706]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15706]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Oct 09 09:36:53 compute-2 sudo[15731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15731]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15756]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:53 compute-2 sudo[15756]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15756]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:53 compute-2 sudo[15781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15781]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15806]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15831]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:53 compute-2 sudo[15831]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15831]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15856]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15856]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15856]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:36:53 compute-2 sudo[15904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15904]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new
Oct 09 09:36:53 compute-2 sudo[15929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15929]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15954]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:53 compute-2 sudo[15954]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15954]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[15979]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Oct 09 09:36:53 compute-2 sudo[15979]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[15979]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:53.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:53 compute-2 sudo[16004]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph
Oct 09 09:36:53 compute-2 sudo[16004]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[16004]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:53 compute-2 sudo[16029]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:53 compute-2 sudo[16029]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:53 compute-2 sudo[16029]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:54 compute-2 sudo[16054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16054]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16079]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16079]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16079]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16127]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16127]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16127]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16153]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16153]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16153]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16178]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-2 sudo[16178]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16178]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:54 compute-2 sudo[16203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16203]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config
Oct 09 09:36:54 compute-2 sudo[16228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16228]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16253]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16278]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e41 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:54 compute-2 sudo[16278]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16278]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16303]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16303]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16303]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16351]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16351]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16351]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new
Oct 09 09:36:54 compute-2 sudo[16376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16376]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 sudo[16401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv /tmp/cephadm-286f8bf0-da72-5823-9a4e-ac4457d9e609/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring.new /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:54 compute-2 sudo[16401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:54 compute-2 sudo[16401]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: pgmap v5: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.conf
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-2 ceph-mon[5983]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Oct 09 09:36:54 compute-2 ceph-mon[5983]: mgrmap e31: compute-0.lwqgfy(active, since 4s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:36:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:36:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:55.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Updating compute-2:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Updating compute-0:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Updating compute-1:/var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/config/ceph.client.admin.keyring
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Failed to apply ingress.nfs.cephfs spec IngressSpec.from_json(yaml.safe_load('''service_type: ingress
                                          service_id: nfs.cephfs
                                          service_name: ingress.nfs.cephfs
                                          placement:
                                            hosts:
                                            - compute-0
                                            - compute-1
                                            - compute-2
                                          spec:
                                            backend_service: nfs.cephfs
                                            enable_haproxy_protocol: true
                                            first_virtual_router_id: 50
                                            frontend_port: 2049
                                            monitor_port: 9049
                                            virtual_ip: 192.168.122.2/24
                                          ''')): max() arg is an empty sequence
                                          Traceback (most recent call last):
                                            File "/usr/share/ceph/mgr/cephadm/serve.py", line 602, in _apply_all_services
                                              if self._apply_service(spec):
                                            File "/usr/share/ceph/mgr/cephadm/serve.py", line 947, in _apply_service
                                              daemon_spec = svc.prepare_create(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 46, in prepare_create
                                              return self.haproxy_prepare_create(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 74, in haproxy_prepare_create
                                              daemon_spec.final_config, daemon_spec.deps = self.haproxy_generate_config(daemon_spec)
                                            File "/usr/share/ceph/mgr/cephadm/services/ingress.py", line 139, in haproxy_generate_config
                                              num_ranks = 1 + max(by_rank.keys())
                                          ValueError: max() arg is an empty sequence
Oct 09 09:36:55 compute-2 ceph-mon[5983]: pgmap v6: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Oct 09 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.0.0.compute-1.douegr-rgw
Oct 09 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.douegr-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.0.0.compute-1.douegr's ganesha conf is defaulting to empty
Oct 09 09:36:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.0.0.compute-1.douegr on compute-1
Oct 09 09:36:55 compute-2 ceph-mon[5983]: Health check failed: Failed to apply 1 service(s): ingress.nfs.cephfs (CEPHADM_APPLY_SPEC_FAIL)
Oct 09 09:36:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:55.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:36:56 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:36:56 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:36:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:36:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e42 e42: 3 total, 3 up, 3 in
Oct 09 09:36:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:57.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:57 compute-2 ceph-mon[5983]: pgmap v7: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 330 B/s wr, 12 op/s
Oct 09 09:36:57 compute-2 ceph-mon[5983]: osdmap e42: 3 total, 3 up, 3 in
Oct 09 09:36:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 e43: 3 total, 3 up, 3 in
Oct 09 09:36:59 compute-2 ceph-mon[5983]: osdmap e43: 3 total, 3 up, 3 in
Oct 09 09:36:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:36:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:59 compute-2 sudo[16428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:36:59 compute-2 sudo[16428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:59 compute-2 sudo[16428]: pam_unix(sudo:session): session closed for user root
Oct 09 09:36:59 compute-2 sudo[16453]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:36:59 compute-2 sudo[16453]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:36:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.571588428 +0000 UTC m=+0.042799462 container create b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:36:59 compute-2 systemd[1]: Started libpod-conmon-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope.
Oct 09 09:36:59 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.634185629 +0000 UTC m=+0.105396674 container init b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.63943006 +0000 UTC m=+0.110641095 container start b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.640948594 +0000 UTC m=+0.112159629 container attach b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 09:36:59 compute-2 jovial_dubinsky[16528]: 167 167
Oct 09 09:36:59 compute-2 systemd[1]: libpod-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope: Deactivated successfully.
Oct 09 09:36:59 compute-2 conmon[16528]: conmon b9b0dfff2e84c7994227 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope/container/memory.events
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.643511168 +0000 UTC m=+0.114722293 container died b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.555210168 +0000 UTC m=+0.026421213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:36:59 compute-2 systemd[1]: var-lib-containers-storage-overlay-d0d0462144776940901ae48d3e0fab9a51ccc38b79b2ec79bf4dc91101dcf235-merged.mount: Deactivated successfully.
Oct 09 09:36:59 compute-2 podman[16515]: 2025-10-09 09:36:59.662524126 +0000 UTC m=+0.133735161 container remove b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:36:59 compute-2 systemd[1]: libpod-conmon-b9b0dfff2e84c7994227cde790e115531d8bba81d50d54907c545261c9f989b7.scope: Deactivated successfully.
Oct 09 09:36:59 compute-2 systemd[1]: Reloading.
Oct 09 09:36:59 compute-2 systemd-rc-local-generator[16564]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:36:59 compute-2 systemd-sysv-generator[16569]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:36:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:36:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:36:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:36:59.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:36:59 compute-2 systemd[1]: Reloading.
Oct 09 09:37:00 compute-2 systemd-rc-local-generator[16610]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:00 compute-2 systemd-sysv-generator[16613]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:00 compute-2 ceph-mon[5983]: pgmap v10: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 283 B/s wr, 10 op/s
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.cpioam-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:00 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.343970736 +0000 UTC m=+0.032493446 container create 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.398419889 +0000 UTC m=+0.086942599 container init 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1)
Oct 09 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.405935594 +0000 UTC m=+0.094458304 container start 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Oct 09 09:37:00 compute-2 bash[16658]: 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212
Oct 09 09:37:00 compute-2 podman[16658]: 2025-10-09 09:37:00.330292999 +0000 UTC m=+0.018815729 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:00 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:37:00 compute-2 sudo[16453]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:00 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:01 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:37:01 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.1.0.compute-2.cpioam-rgw
Oct 09 09:37:01 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.1.0.compute-2.cpioam's ganesha conf is defaulting to empty
Oct 09 09:37:01 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.1.0.compute-2.cpioam on compute-2
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Oct 09 09:37:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:01.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:01.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:02 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy
Oct 09 09:37:02 compute-2 ceph-mon[5983]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Oct 09 09:37:02 compute-2 ceph-mon[5983]: pgmap v11: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 13 op/s
Oct 09 09:37:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:03.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:03.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:04 compute-2 ceph-mon[5983]: pgmap v12: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.1 KiB/s wr, 12 op/s
Oct 09 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Oct 09 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Oct 09 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct 09 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct 09 09:37:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:05 compute-2 ceph-mon[5983]: Rados config object exists: conf-nfs.cephfs
Oct 09 09:37:05 compute-2 ceph-mon[5983]: Creating key for client.nfs.cephfs.2.0.compute-0.rlqbpy-rgw
Oct 09 09:37:05 compute-2 ceph-mon[5983]: Bind address in nfs.cephfs.2.0.compute-0.rlqbpy's ganesha conf is defaulting to empty
Oct 09 09:37:05 compute-2 ceph-mon[5983]: Deploying daemon nfs.cephfs.2.0.compute-0.rlqbpy on compute-0
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:05.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:05.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #13. Immutable memtables: 0.
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.044410) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 13
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626044515, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 5741, "num_deletes": 259, "total_data_size": 19312681, "memory_usage": 20425528, "flush_reason": "Manual Compaction"}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #14: started
Oct 09 09:37:06 compute-2 ceph-mon[5983]: pgmap v13: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 895 B/s wr, 2 op/s
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626070152, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 14, "file_size": 12329919, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 6, "largest_seqno": 5746, "table_properties": {"data_size": 12308316, "index_size": 13617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6917, "raw_key_size": 66691, "raw_average_key_size": 24, "raw_value_size": 12254607, "raw_average_value_size": 4449, "num_data_blocks": 604, "num_entries": 2754, "num_filter_entries": 2754, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 1760002514, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 14, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 25784 microseconds, and 18579 cpu microseconds.
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070204) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #14: 12329919 bytes OK
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070224) [db/memtable_list.cc:519] [default] Level-0 commit table #14 started
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070644) [db/memtable_list.cc:722] [default] Level-0 commit table #14: memtable #1 done
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070658) EVENT_LOG_v1 {"time_micros": 1760002626070654, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [2, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.070672) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[2 0 0 0 0 0 0] max score 0.50
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 19281571, prev total WAL file size 19283475, number of live WAL files 2.
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.073383) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323534' seq:0, type:0; will stop at (end)
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 2@0 files to L6, score -1.00
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [14(11MB) 8(1648B)]
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626073482, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [14, 8], "score": -1, "input_data_size": 12331567, "oldest_snapshot_seqno": -1}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #15: 2498 keys, 12325993 bytes, temperature: kUnknown
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626101831, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 15, "file_size": 12325993, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12305072, "index_size": 13580, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6277, "raw_key_size": 63134, "raw_average_key_size": 25, "raw_value_size": 12254665, "raw_average_value_size": 4905, "num_data_blocks": 602, "num_entries": 2498, "num_filter_entries": 2498, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002626, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 15, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102167) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 2@0 files to L6 => 12325993 bytes
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102574) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 432.0 rd, 431.8 wr, level 6, files in(2, 0) out(1 +0 blob) MB in(11.8, 0.0 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 2759, records dropped: 261 output_compression: NoCompression
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.102591) EVENT_LOG_v1 {"time_micros": 1760002626102582, "job": 4, "event": "compaction_finished", "compaction_time_micros": 28545, "compaction_time_cpu_micros": 20374, "output_level": 6, "num_output_files": 1, "total_output_size": 12325993, "num_input_records": 2759, "num_output_records": 2498, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000014.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626104623, "job": 4, "event": "table_file_deletion", "file_number": 14}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002626104852, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct 09 09:37:06 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:06.073317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000002:nfs.cephfs.1: -2
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:06 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:07.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:07.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:08 compute-2 ceph-mon[5983]: pgmap v14: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 1.9 KiB/s wr, 5 op/s
Oct 09 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:08 compute-2 sudo[16728]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:08 compute-2 sudo[16728]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-2 sudo[16728]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-2 sudo[16754]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:08 compute-2 sudo[16754]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-2 sudo[16754]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-2 sudo[16779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:08 compute-2 sudo[16779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-2 sudo[16779]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:08 compute-2 sudo[16804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:08 compute-2 sudo[16804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:08 compute-2 podman[16884]: 2025-10-09 09:37:08.786598754 +0000 UTC m=+0.056258858 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 09:37:08 compute-2 podman[16884]: 2025-10-09 09:37:08.884239789 +0000 UTC m=+0.153899882 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:37:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:09.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:09 compute-2 podman[16964]: 2025-10-09 09:37:09.206057171 +0000 UTC m=+0.039956571 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:09 compute-2 podman[16964]: 2025-10-09 09:37:09.215108703 +0000 UTC m=+0.049008082 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:09 compute-2 podman[17065]: 2025-10-09 09:37:09.6093714 +0000 UTC m=+0.044963834 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:09 compute-2 podman[17065]: 2025-10-09 09:37:09.615009705 +0000 UTC m=+0.050602119 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:09 compute-2 podman[17118]: 2025-10-09 09:37:09.774686021 +0000 UTC m=+0.038175151 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, release=1793, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, name=keepalived, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 09 09:37:09 compute-2 podman[17118]: 2025-10-09 09:37:09.786066375 +0000 UTC m=+0.049555485 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, version=2.2.4, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., io.openshift.expose-services=, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2023-02-22T09:23:20)
Oct 09 09:37:09 compute-2 podman[17159]: 2025-10-09 09:37:09.921733389 +0000 UTC m=+0.048048101 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:37:09 compute-2 podman[17159]: 2025-10-09 09:37:09.931079697 +0000 UTC m=+0.057394398 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Oct 09 09:37:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:09.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:09 compute-2 sudo[16804]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:10 compute-2 ceph-mon[5983]: pgmap v15: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Oct 09 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:11.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:11 compute-2 ceph-mon[5983]: pgmap v16: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct 09 09:37:11 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-1.oqhtjo on compute-1
Oct 09 09:37:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:11.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:12 compute-2 ceph-mon[5983]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 1 service(s): ingress.nfs.cephfs)
Oct 09 09:37:12 compute-2 ceph-mon[5983]: Cluster is now healthy
Oct 09 09:37:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:13.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:13 compute-2 ceph-mon[5983]: pgmap v17: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:13.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:14 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:14 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-0.ujrhwc on compute-0
Oct 09 09:37:14 compute-2 sudo[17191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:14 compute-2 sudo[17191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:14 compute-2 sudo[17191]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:14 compute-2 sudo[17216]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/haproxy:2.3 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:14 compute-2 sudo[17216]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:15.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.194713568 +0000 UTC m=+0.029223006 container create cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 systemd[1]: Started libpod-conmon-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope.
Oct 09 09:37:15 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.271511813 +0000 UTC m=+0.106021270 container init cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.276943547 +0000 UTC m=+0.111452986 container start cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.278447103 +0000 UTC m=+0.112956561 container attach cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.182444128 +0000 UTC m=+0.016953586 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:37:15 compute-2 silly_mayer[17289]: 0 0
Oct 09 09:37:15 compute-2 systemd[1]: libpod-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope: Deactivated successfully.
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.281688357 +0000 UTC m=+0.116197795 container died cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 systemd[1]: var-lib-containers-storage-overlay-56464be254efef73de856d2c5f47a0165c4aa2921d827a7ee89356e7e23458d2-merged.mount: Deactivated successfully.
Oct 09 09:37:15 compute-2 podman[17276]: 2025-10-09 09:37:15.304270738 +0000 UTC m=+0.138780175 container remove cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15 (image=quay.io/ceph/haproxy:2.3, name=silly_mayer)
Oct 09 09:37:15 compute-2 systemd[1]: libpod-conmon-cf70dc9ec5c63f7763d025011e29702e614b58c033a49a22a377acde47052b15.scope: Deactivated successfully.
Oct 09 09:37:15 compute-2 systemd[1]: Reloading.
Oct 09 09:37:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:15 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4001e10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:15 compute-2 systemd-sysv-generator[17333]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:15 compute-2 systemd-rc-local-generator[17330]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:15 compute-2 systemd[1]: Reloading.
Oct 09 09:37:15 compute-2 systemd-sysv-generator[17378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:15 compute-2 systemd-rc-local-generator[17374]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:15 compute-2 ceph-mon[5983]: pgmap v18: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:15 compute-2 ceph-mon[5983]: Deploying daemon haproxy.nfs.cephfs.compute-2.iyubhq on compute-2
Oct 09 09:37:15 compute-2 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-2.iyubhq for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:15.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.034510303 +0000 UTC m=+0.031441241 container create 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct 09 09:37:16 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e7b1420b5cf7143ab4f3c5e5dc855f602e24e738357efff898f85e0792ec0e22/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.076776298 +0000 UTC m=+0.073707246 container init 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct 09 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.080636498 +0000 UTC m=+0.077567436 container start 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110 (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq)
Oct 09 09:37:16 compute-2 bash[17423]: 64d13f02344a1c83598fed6ff2d40d88b8e580c996302cd4f033291404b26110
Oct 09 09:37:16 compute-2 podman[17423]: 2025-10-09 09:37:16.02226161 +0000 UTC m=+0.019192569 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Oct 09 09:37:16 compute-2 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-2.iyubhq for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/093716 (2) : New worker #1 (4) forked
Oct 09 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093716 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:16 compute-2 sudo[17216]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:16 compute-2 sudo[17446]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:16 compute-2 sudo[17446]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:16 compute-2 sudo[17446]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:16 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:16 compute-2 sudo[17471]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/keepalived:2.2.4 --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:16 compute-2 sudo[17471]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.606012763 +0000 UTC m=+0.040425206 container create 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, distribution-scope=public, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.expose-services=, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, version=2.2.4)
Oct 09 09:37:16 compute-2 systemd[1]: Started libpod-conmon-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope.
Oct 09 09:37:16 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.673493062 +0000 UTC m=+0.107905516 container init 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, vendor=Red Hat, Inc., vcs-type=git, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, version=2.2.4, release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph)
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.679915526 +0000 UTC m=+0.114327959 container start 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, description=keepalived for Ceph, version=2.2.4, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc.)
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.681182265 +0000 UTC m=+0.115594699 container attach 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.28.2, version=2.2.4, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793)
Oct 09 09:37:16 compute-2 lucid_allen[17543]: 0 0
Oct 09 09:37:16 compute-2 systemd[1]: libpod-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope: Deactivated successfully.
Oct 09 09:37:16 compute-2 conmon[17543]: conmon 96d5182dbb98376f6e36 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope/container/memory.events
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.685150299 +0000 UTC m=+0.119562731 container died 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, distribution-scope=public, name=keepalived, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vendor=Red Hat, Inc.)
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.593453475 +0000 UTC m=+0.027865928 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:37:16 compute-2 systemd[1]: var-lib-containers-storage-overlay-c896448760f11553d02f71f05001b6463caea94839919dbca1cd253a2a9b6cf0-merged.mount: Deactivated successfully.
Oct 09 09:37:16 compute-2 podman[17530]: 2025-10-09 09:37:16.70754713 +0000 UTC m=+0.141959564 container remove 96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a (image=quay.io/ceph/keepalived:2.2.4, name=lucid_allen, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, release=1793, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=)
Oct 09 09:37:16 compute-2 systemd[1]: libpod-conmon-96d5182dbb98376f6e362b621f7b96ae88f07bbc54d996941f5f04098d8b979a.scope: Deactivated successfully.
Oct 09 09:37:16 compute-2 systemd[1]: Reloading.
Oct 09 09:37:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:16 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:16 compute-2 systemd-rc-local-generator[17580]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:16 compute-2 systemd-sysv-generator[17585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:17 compute-2 systemd[1]: Reloading.
Oct 09 09:37:17 compute-2 systemd-rc-local-generator[17623]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:37:17 compute-2 systemd-sysv-generator[17626]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:37:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:17.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:17 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:17 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-2.dgxvnq on compute-2
Oct 09 09:37:17 compute-2 ceph-mon[5983]: pgmap v19: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 4.8 KiB/s rd, 1.9 KiB/s wr, 7 op/s
Oct 09 09:37:17 compute-2 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-2.dgxvnq for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.392930524 +0000 UTC m=+0.030407620 container create 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph)
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:17 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4002a90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4679a2aa40a070805f9d9565e83053f2661c47dd86137a73c7583a1cb944b8e/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.438737899 +0000 UTC m=+0.076214995 container init 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, version=2.2.4, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 09 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.4426722 +0000 UTC m=+0.080149286 container start 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, description=keepalived for Ceph, io.openshift.expose-services=, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64, vcs-type=git, build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Oct 09 09:37:17 compute-2 bash[17679]: 7c12db35d6bc4ca0b3becec2c7073fd818bdbb324e21733ab0d4bc9d12778f9f
Oct 09 09:37:17 compute-2 podman[17679]: 2025-10-09 09:37:17.381532166 +0000 UTC m=+0.019009282 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Oct 09 09:37:17 compute-2 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-2.dgxvnq for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Starting Keepalived v2.2.4 (08/21,2021)
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Running on Linux 5.14.0-620.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025 (built for Linux 5.14.0)
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Configuration file /etc/keepalived/keepalived.conf
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Failed to bind to process monitoring socket - errno 98 - Address already in use
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Starting VRRP child process, pid=4
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: Startup complete
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: (VI_0) Entering BACKUP STATE (init)
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: VRRP_Script(check_backend) succeeded
Oct 09 09:37:17 compute-2 sudo[17471]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:17.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:18 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4002a90 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:18 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:18 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-1.zabdum on compute-1
Oct 09 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:18 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec0023e0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:19.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:19 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:19 compute-2 ceph-mon[5983]: pgmap v20: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct 09 09:37:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:20 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:20 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:21 2025: (VI_0) Entering MASTER STATE
Oct 09 09:37:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:21.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:21 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be40037a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:21 compute-2 ceph-mon[5983]: pgmap v21: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 973 B/s wr, 4 op/s
Oct 09 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:22 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Oct 09 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Oct 09 09:37:22 compute-2 ceph-mon[5983]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Oct 09 09:37:22 compute-2 ceph-mon[5983]: Deploying daemon keepalived.nfs.cephfs.compute-0.qjivil on compute-0
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:22 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be0004000 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:23.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:23 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:23 compute-2 ceph-mon[5983]: pgmap v22: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 938 B/s wr, 4 op/s
Oct 09 09:37:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:24 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:24 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:24 2025: (VI_0) Received advert from 192.168.122.101 with lower priority 90, ours 90, forcing new election
Oct 09 09:37:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:25.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:25 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be00054f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:25 compute-2 ceph-mon[5983]: pgmap v23: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:37:25 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:25 compute-2 sudo[17707]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:25 compute-2 sudo[17707]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-2 sudo[17707]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:25 compute-2 sudo[17732]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:25 compute-2 sudo[17732]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-2 sudo[17732]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:25 compute-2 sshd-session[17744]: Accepted publickey for zuul from 192.168.122.30 port 60658 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:37:25 compute-2 systemd-logind[800]: New session 21 of user zuul.
Oct 09 09:37:25 compute-2 systemd[1]: Started Session 21 of User zuul.
Oct 09 09:37:25 compute-2 sudo[17759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:25 compute-2 sshd-session[17744]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:37:25 compute-2 sudo[17759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:25.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) Entering BACKUP STATE
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:26 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:26 compute-2 podman[17895]: 2025-10-09 09:37:26.331555395 +0000 UTC m=+0.041545367 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:37:26 compute-2 podman[17895]: 2025-10-09 09:37:26.451103358 +0000 UTC m=+0.161093310 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Entering MASTER STATE
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Master received advert from 192.168.122.100 with higher priority 100, ours 90
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:26 2025: (VI_0) Entering BACKUP STATE
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:26 compute-2 podman[18074]: 2025-10-09 09:37:26.738997178 +0000 UTC m=+0.042117856 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:26 compute-2 podman[18074]: 2025-10-09 09:37:26.755282144 +0000 UTC m=+0.058402820 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:26 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:26 compute-2 python3.9[18024]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:27 compute-2 podman[18195]: 2025-10-09 09:37:27.137417839 +0000 UTC m=+0.051183656 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:37:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:27.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:37:27 compute-2 podman[18195]: 2025-10-09 09:37:27.143478719 +0000 UTC m=+0.057244546 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:27 compute-2 podman[18260]: 2025-10-09 09:37:27.386055721 +0000 UTC m=+0.047787189 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.buildah.version=1.28.2, release=1793, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., version=2.2.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, build-date=2023-02-22T09:23:20, name=keepalived, description=keepalived for Ceph)
Oct 09 09:37:27 compute-2 podman[18260]: 2025-10-09 09:37:27.424184524 +0000 UTC m=+0.085915991 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-type=git, vendor=Red Hat, Inc., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, description=keepalived for Ceph, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=2.2.4, io.buildah.version=1.28.2, release=1793, architecture=x86_64, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, name=keepalived, distribution-scope=public)
Oct 09 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:27 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:27 compute-2 podman[18334]: 2025-10-09 09:37:27.594144035 +0000 UTC m=+0.057035883 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct 09 09:37:27 compute-2 podman[18334]: 2025-10-09 09:37:27.605070212 +0000 UTC m=+0.067962050 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 09 09:37:27 compute-2 ceph-mon[5983]: pgmap v24: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:27 compute-2 sudo[17759]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:27.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:28 compute-2 sudo[18534]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hropgexatxmvnyqdkgrcpcclqrgjvhed ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002647.6594226-59-55180706651132/AnsiballZ_command.py'
Oct 09 09:37:28 compute-2 sudo[18534]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:28 compute-2 sudo[18538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:28 compute-2 sudo[18538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:28 compute-2 sudo[18538]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:28 compute-2 python3.9[18536]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:28 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec002d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:29.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:29 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:29.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:30 compute-2 ceph-mon[5983]: pgmap v25: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:30 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:30 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:31.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:37:31 compute-2 sudo[18585]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:31 compute-2 sudo[18585]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:31 compute-2 sudo[18585]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:31 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:31.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:32 compute-2 ceph-mon[5983]: pgmap v26: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.lwqgfy", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:37:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:32 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:32 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-0 (monmap changed)...
Oct 09 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-0 on compute-0
Oct 09 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring mgr.compute-0.lwqgfy (monmap changed)...
Oct 09 09:37:33 compute-2 ceph-mon[5983]: Reconfiguring daemon mgr.compute-0.lwqgfy on compute-0
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct 09 09:37:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:33.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:33 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:33.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring crash.compute-0 (monmap changed)...
Oct 09 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring daemon crash.compute-0 on compute-0
Oct 09 09:37:34 compute-2 ceph-mon[5983]: pgmap v27: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring osd.1 (monmap changed)...
Oct 09 09:37:34 compute-2 ceph-mon[5983]: Reconfiguring daemon osd.1 on compute-0
Oct 09 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:37:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:34 compute-2 sudo[18534]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:34 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:34 compute-2 sshd-session[17785]: Connection closed by 192.168.122.30 port 60658
Oct 09 09:37:34 compute-2 sshd-session[17744]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:37:34 compute-2 systemd[1]: session-21.scope: Deactivated successfully.
Oct 09 09:37:34 compute-2 systemd[1]: session-21.scope: Consumed 6.914s CPU time.
Oct 09 09:37:34 compute-2 systemd-logind[800]: Session 21 logged out. Waiting for processes to exit.
Oct 09 09:37:34 compute-2 systemd-logind[800]: Removed session 21.
Oct 09 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Oct 09 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Oct 09 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Oct 09 09:37:35 compute-2 ceph-mon[5983]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Oct 09 09:37:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:35.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:35 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:37:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:35.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:37:36 compute-2 ceph-mon[5983]: pgmap v28: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Oct 09 09:37:36 compute-2 ceph-mon[5983]: Reconfiguring grafana.compute-0 (dependencies changed)...
Oct 09 09:37:36 compute-2 ceph-mon[5983]: Reconfiguring daemon grafana.compute-0 on compute-0
Oct 09 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:36 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:36 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:37.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring crash.compute-1 (monmap changed)...
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring daemon crash.compute-1 on compute-1
Oct 09 09:37:37 compute-2 ceph-mon[5983]: pgmap v29: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring osd.0 (monmap changed)...
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: Reconfiguring daemon osd.0 on compute-1
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:37 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf0000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:37 compute-2 sudo[18657]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:37 compute-2 sudo[18657]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:37 compute-2 sudo[18657]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:37 compute-2 sudo[18682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:37 compute-2 sudo[18682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.864053442 +0000 UTC m=+0.032967779 container create 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 09:37:37 compute-2 systemd[1]: Started libpod-conmon-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope.
Oct 09 09:37:37 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.920158284 +0000 UTC m=+0.089072631 container init 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.926071942 +0000 UTC m=+0.094986280 container start 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.927387764 +0000 UTC m=+0.096302101 container attach 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:37 compute-2 charming_leavitt[18734]: 167 167
Oct 09 09:37:37 compute-2 systemd[1]: libpod-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope: Deactivated successfully.
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.931540738 +0000 UTC m=+0.100455075 container died 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Oct 09 09:37:37 compute-2 systemd[1]: var-lib-containers-storage-overlay-aa25ae9fce5831dab22c2eded0dabdd34c56215cad572c4c4d539f266a5536ea-merged.mount: Deactivated successfully.
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.850498305 +0000 UTC m=+0.019412641 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:37 compute-2 podman[18720]: 2025-10-09 09:37:37.954522556 +0000 UTC m=+0.123436893 container remove 7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_leavitt, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 09:37:37 compute-2 systemd[1]: libpod-conmon-7900d403d89713bfb946c9db6f2d7b7abd737b52e78bf4ebef4f93ec3a93ad3a.scope: Deactivated successfully.
Oct 09 09:37:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:37.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:37 compute-2 sudo[18682]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:38 compute-2 sudo[18749]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:38 compute-2 sudo[18749]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-2 sudo[18749]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:38 compute-2 sudo[18774]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 _orch deploy --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609
Oct 09 09:37:38 compute-2 sudo[18774]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:38 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.380076043 +0000 UTC m=+0.033320147 container create e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Oct 09 09:37:38 compute-2 systemd[1]: Started libpod-conmon-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope.
Oct 09 09:37:38 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.435670557 +0000 UTC m=+0.088914682 container init e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.440721051 +0000 UTC m=+0.093965156 container start e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.441888932 +0000 UTC m=+0.095133037 container attach e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:38 compute-2 charming_mcnulty[18826]: 167 167
Oct 09 09:37:38 compute-2 systemd[1]: libpod-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope: Deactivated successfully.
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.444653797 +0000 UTC m=+0.097897902 container died e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 09:37:38 compute-2 systemd[1]: var-lib-containers-storage-overlay-839e22ce41db17ec9436139cd35ba74a9a9ed8d4db44de8a2ce947b27f2ccd8d-merged.mount: Deactivated successfully.
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.366226547 +0000 UTC m=+0.019470672 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:37:38 compute-2 podman[18813]: 2025-10-09 09:37:38.466683673 +0000 UTC m=+0.119927779 container remove e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_mcnulty, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:37:38 compute-2 systemd[1]: libpod-conmon-e1895075e0b745dfbda047fa7796cae08147fd6ced0b62b4c672645f5e2276a2.scope: Deactivated successfully.
Oct 09 09:37:38 compute-2 sudo[18774]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-1 (monmap changed)...
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-1 on compute-1
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mon.compute-2 (monmap changed)...
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mon.compute-2 on compute-2
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring mgr.compute-2.takdnm (monmap changed)...
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.takdnm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:38 compute-2 ceph-mon[5983]: Reconfiguring daemon mgr.compute-2.takdnm on compute-2
Oct 09 09:37:38 compute-2 sudo[18841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:37:38 compute-2 sudo[18841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-2 sudo[18841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:38 compute-2 sudo[18866]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:37:38 compute-2 sudo[18866]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:38 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:39 compute-2 podman[18947]: 2025-10-09 09:37:39.127521398 +0000 UTC m=+0.041508391 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:37:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:39.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:39 compute-2 podman[18947]: 2025-10-09 09:37:39.222142784 +0000 UTC m=+0.136129758 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct 09 09:37:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:39 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:39 compute-2 ceph-mon[5983]: pgmap v30: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Oct 09 09:37:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:39 compute-2 podman[19028]: 2025-10-09 09:37:39.521919266 +0000 UTC m=+0.043390997 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:39 compute-2 podman[19028]: 2025-10-09 09:37:39.554232876 +0000 UTC m=+0.075704587 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:39 compute-2 podman[19127]: 2025-10-09 09:37:39.929325708 +0000 UTC m=+0.042786913 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:39 compute-2 podman[19127]: 2025-10-09 09:37:39.943015382 +0000 UTC m=+0.056476577 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:37:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:39.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:40 compute-2 podman[19178]: 2025-10-09 09:37:40.10992292 +0000 UTC m=+0.044212562 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, com.redhat.component=keepalived-container, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git)
Oct 09 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093740 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:37:40 compute-2 podman[19178]: 2025-10-09 09:37:40.118720559 +0000 UTC m=+0.053010181 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, vcs-type=git, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, release=1793, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, name=keepalived, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20)
Oct 09 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:40 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:40 compute-2 podman[19220]: 2025-10-09 09:37:40.26313746 +0000 UTC m=+0.047549472 container exec 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct 09 09:37:40 compute-2 podman[19220]: 2025-10-09 09:37:40.272100923 +0000 UTC m=+0.056512935 container exec_died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 09:37:40 compute-2 sudo[18866]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:40 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf00021f0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:37:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:37:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:41.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:41 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:41.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:42 compute-2 ceph-mon[5983]: pgmap v31: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:42 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:42 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:43.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:43 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #16. Immutable memtables: 0.
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.817194) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 16
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663817250, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1463, "num_deletes": 251, "total_data_size": 4339746, "memory_usage": 4409408, "flush_reason": "Manual Compaction"}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #17: started
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663823667, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 17, "file_size": 2428875, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 5751, "largest_seqno": 7209, "table_properties": {"data_size": 2422768, "index_size": 3178, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14731, "raw_average_key_size": 20, "raw_value_size": 2409516, "raw_average_value_size": 3318, "num_data_blocks": 147, "num_entries": 726, "num_filter_entries": 726, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002626, "oldest_key_time": 1760002626, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 17, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 6499 microseconds, and 4697 cpu microseconds.
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823702) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #17: 2428875 bytes OK
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.823715) [db/memtable_list.cc:519] [default] Level-0 commit table #17 started
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824065) [db/memtable_list.cc:722] [default] Level-0 commit table #17: memtable #1 done
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824077) EVENT_LOG_v1 {"time_micros": 1760002663824073, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824090) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 4332458, prev total WAL file size 4332458, number of live WAL files 2.
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000013.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824867) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [17(2371KB)], [15(11MB)]
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663824932, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [17], "files_L6": [15], "score": -1, "input_data_size": 14754868, "oldest_snapshot_seqno": -1}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #18: 2698 keys, 13382042 bytes, temperature: kUnknown
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663857427, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 18, "file_size": 13382042, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13359867, "index_size": 14322, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 6789, "raw_key_size": 68558, "raw_average_key_size": 25, "raw_value_size": 13305771, "raw_average_value_size": 4931, "num_data_blocks": 634, "num_entries": 2698, "num_filter_entries": 2698, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002663, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 18, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857604) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13382042 bytes
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.857987) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 453.3 rd, 411.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 11.8 +0.0 blob) out(12.8 +0.0 blob), read-write-amplify(11.6) write-amplify(5.5) OK, records in: 3224, records dropped: 526 output_compression: NoCompression
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.858003) EVENT_LOG_v1 {"time_micros": 1760002663857995, "job": 6, "event": "compaction_finished", "compaction_time_micros": 32552, "compaction_time_cpu_micros": 19888, "output_level": 6, "num_output_files": 1, "total_output_size": 13382042, "num_input_records": 3224, "num_output_records": 2698, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000017.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663858460, "job": 6, "event": "table_file_deletion", "file_number": 17}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000015.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002663859897, "job": 6, "event": "table_file_deletion", "file_number": 15}
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.824715) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859983) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859986) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859988) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:37:43.859989) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:37:43 compute-2 sudo[19282]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:37:43 compute-2 sudo[19282]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:43 compute-2 sudo[19282]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:43.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:44 compute-2 ceph-mon[5983]: pgmap v32: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Oct 09 09:37:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:44 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:44 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf8001080 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:45.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:45 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:45.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:46 compute-2 ceph-mon[5983]: pgmap v33: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:46 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:46 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0be4004630 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000037s ======
Oct 09 09:37:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:47.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000037s
Oct 09 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:47 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bf8001bc0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:47.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:48 compute-2 ceph-mon[5983]: pgmap v34: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:37:48 compute-2 sudo[19312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:37:48 compute-2 sudo[19312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:37:48 compute-2 sudo[19312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:48 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[16670]: 09/10/2025 09:37:48 : epoch 68e7823c : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0bec003fa0 fd 37 proxy ignored for local
Oct 09 09:37:48 compute-2 kernel: ganesha.nfsd[17190]: segfault at 50 ip 00007f0c9e66a32e sp 00007f0c56ffc210 error 4 in libntirpc.so.5.8[7f0c9e64f000+2c000] likely on CPU 2 (core 0, socket 2)
Oct 09 09:37:48 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:37:48 compute-2 systemd[1]: Created slice Slice /system/systemd-coredump.
Oct 09 09:37:48 compute-2 systemd[1]: Started Process Core Dump (PID 19337/UID 0).
Oct 09 09:37:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:49.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:49 compute-2 systemd-coredump[19338]: Process 16674 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 55:
                                                   #0  0x00007f0c9e66a32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:37:49 compute-2 systemd[1]: systemd-coredump@0-19337-0.service: Deactivated successfully.
Oct 09 09:37:49 compute-2 podman[19347]: 2025-10-09 09:37:49.913263901 +0000 UTC m=+0.022600869 container died 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Oct 09 09:37:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-0a6547798e8d0d0c4a1a36d3bb36bd013818446bbe57fa85f789913a590475d7-merged.mount: Deactivated successfully.
Oct 09 09:37:49 compute-2 podman[19347]: 2025-10-09 09:37:49.932730062 +0000 UTC m=+0.042067040 container remove 7d26b56515fe23482e24a0babb35ec7b09736c577492b12f7ff7cbb5a0673212 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct 09 09:37:49 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:37:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:37:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:49.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:37:50 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct 09 09:37:50 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.144s CPU time.
Oct 09 09:37:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e44 e44: 3 total, 3 up, 3 in
Oct 09 09:37:50 compute-2 ceph-mon[5983]: pgmap v35: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:37:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093750 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:50 compute-2 sshd-session[19381]: Accepted publickey for zuul from 192.168.122.30 port 51912 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:37:50 compute-2 systemd-logind[800]: New session 22 of user zuul.
Oct 09 09:37:50 compute-2 systemd[1]: Started Session 22 of User zuul.
Oct 09 09:37:50 compute-2 sshd-session[19381]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e45 e45: 3 total, 3 up, 3 in
Oct 09 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:51 compute-2 ceph-mon[5983]: osdmap e44: 3 total, 3 up, 3 in
Oct 09 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:51 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:51 compute-2 python3.9[19534]: ansible-ansible.legacy.ping Invoked with data=pong
Oct 09 09:37:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:51.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=12.595129967s) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active pruub 117.914047241s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:37:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 45 pg[3.0( empty local-lis/les=34/35 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45 pruub=12.595129967s) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown pruub 117.914047241s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:52.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e46 e46: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=34/35 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.15( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.0( empty local-lis/les=45/46 n=0 ec=11/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-mon[5983]: pgmap v37: 43 pgs: 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:52 compute-2 ceph-mon[5983]: osdmap e45: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:52 compute-2 ceph-mon[5983]: osdmap e46: 3 total, 3 up, 3 in
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.11( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.1d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.8( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 46 pg[3.9( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=34/34 les/c/f=35/35/0 sis=45) [2] r=0 lpr=45 pi=[34,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:52 compute-2 python3.9[19709]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct 09 09:37:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct 09 09:37:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e47 e47: 3 total, 3 up, 3 in
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct 09 09:37:53 compute-2 ceph-mon[5983]: 3.1e scrub starts
Oct 09 09:37:53 compute-2 ceph-mon[5983]: 3.1e scrub ok
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:53 compute-2 ceph-mon[5983]: osdmap e47: 3 total, 3 up, 3 in
Oct 09 09:37:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:53 compute-2 sudo[19864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqmnjyfynexiseelmlmrafjrarewqvzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002672.7343867-95-16719155501204/AnsiballZ_command.py'
Oct 09 09:37:53 compute-2 sudo[19864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.16 deep-scrub starts
Oct 09 09:37:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:53.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.16 deep-scrub ok
Oct 09 09:37:53 compute-2 python3.9[19866]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:37:53 compute-2 sudo[19864]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:53 compute-2 sudo[20018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdkpqpzznbppseqdekbtveglciqvunbv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002673.6219795-132-257717816342346/AnsiballZ_stat.py'
Oct 09 09:37:53 compute-2 sudo[20018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:54.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e48 e48: 3 total, 3 up, 3 in
Oct 09 09:37:54 compute-2 python3.9[20020]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:37:54 compute-2 ceph-mon[5983]: pgmap v40: 74 pgs: 31 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:37:54 compute-2 ceph-mon[5983]: 3.16 deep-scrub starts
Oct 09 09:37:54 compute-2 ceph-mon[5983]: 3.16 deep-scrub ok
Oct 09 09:37:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:54 compute-2 ceph-mon[5983]: osdmap e48: 3 total, 3 up, 3 in
Oct 09 09:37:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:54 compute-2 sudo[20018]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct 09 09:37:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 47 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.346458435s) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active pruub 117.914161682s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.0( empty local-lis/les=34/35 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47 pruub=10.346458435s) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown pruub 117.914161682s@ mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.2( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.3( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.4( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.5( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.6( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.7( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.8( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.9( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.a( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.b( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.c( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.d( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.e( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.f( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.10( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.11( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.12( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.13( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.14( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.15( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.16( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.17( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.18( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.19( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1a( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1b( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1c( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1d( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1e( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 48 pg[5.1f( empty local-lis/les=34/35 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:37:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:54 compute-2 sudo[20173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jemxfdwxyndquxzqywbpgcochmacptjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002674.4003325-165-223760355663230/AnsiballZ_file.py'
Oct 09 09:37:54 compute-2 sudo[20173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093754 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:37:54 compute-2 python3.9[20175]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:37:54 compute-2 sudo[20173]: pam_unix(sudo:session): session closed for user root
Oct 09 09:37:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e49 e49: 3 total, 3 up, 3 in
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.4( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.0( empty local-lis/les=47/49 n=0 ec=13/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.8( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.12( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.13( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 49 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=34/34 les/c/f=35/35/0 sis=47) [2] r=0 lpr=47 pi=[34,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:37:55 compute-2 ceph-mon[5983]: 3.18 scrub starts
Oct 09 09:37:55 compute-2 ceph-mon[5983]: 3.18 scrub ok
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct 09 09:37:55 compute-2 ceph-mon[5983]: osdmap e49: 3 total, 3 up, 3 in
Oct 09 09:37:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct 09 09:37:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct 09 09:37:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:55.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:55 compute-2 python3.9[20325]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:37:55 compute-2 network[20343]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:37:55 compute-2 network[20344]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:37:55 compute-2 network[20345]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:37:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct 09 09:37:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:56.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct 09 09:37:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e50 e50: 3 total, 3 up, 3 in
Oct 09 09:37:56 compute-2 ceph-mon[5983]: pgmap v43: 136 pgs: 93 unknown, 43 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:37:56 compute-2 ceph-mon[5983]: 4.1a scrub starts
Oct 09 09:37:56 compute-2 ceph-mon[5983]: 4.1a scrub ok
Oct 09 09:37:56 compute-2 ceph-mon[5983]: 3.1b scrub starts
Oct 09 09:37:56 compute-2 ceph-mon[5983]: 3.1b scrub ok
Oct 09 09:37:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:56 compute-2 ceph-mon[5983]: osdmap e50: 3 total, 3 up, 3 in
Oct 09 09:37:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Oct 09 09:37:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Oct 09 09:37:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e51 e51: 3 total, 3 up, 3 in
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 4.1d deep-scrub starts
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 4.1d deep-scrub ok
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 3.17 deep-scrub starts
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 3.17 deep-scrub ok
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 7.1c deep-scrub starts
Oct 09 09:37:57 compute-2 ceph-mon[5983]: 7.1c deep-scrub ok
Oct 09 09:37:57 compute-2 ceph-mon[5983]: pgmap v46: 182 pgs: 2 peering, 46 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:57 compute-2 ceph-mon[5983]: osdmap e51: 3 total, 3 up, 3 in
Oct 09 09:37:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.14 deep-scrub starts
Oct 09 09:37:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.14 deep-scrub ok
Oct 09 09:37:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:57.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:37:58.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Oct 09 09:37:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e52 e52: 3 total, 3 up, 3 in
Oct 09 09:37:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 4.1b deep-scrub starts
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 4.1b deep-scrub ok
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 3.14 deep-scrub starts
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 3.14 deep-scrub ok
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 7.1b scrub starts
Oct 09 09:37:58 compute-2 ceph-mon[5983]: 7.1b scrub ok
Oct 09 09:37:58 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:58 compute-2 ceph-mon[5983]: osdmap e52: 3 total, 3 up, 3 in
Oct 09 09:37:58 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct 09 09:37:58 compute-2 python3.9[20610]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:37:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:58 compute-2 python3.9[20761]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:37:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e53 e53: 3 total, 3 up, 3 in
Oct 09 09:37:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct 09 09:37:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 4.17 scrub starts
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 4.17 scrub ok
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 3.13 scrub starts
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 3.13 scrub ok
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 7.1a scrub starts
Oct 09 09:37:59 compute-2 ceph-mon[5983]: 7.1a scrub ok
Oct 09 09:37:59 compute-2 ceph-mon[5983]: pgmap v49: 244 pgs: 2 peering, 108 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 255 B/s wr, 1 op/s
Oct 09 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct 09 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:37:59 compute-2 ceph-mon[5983]: osdmap e53: 3 total, 3 up, 3 in
Oct 09 09:37:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:37:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:37:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:37:59.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:37:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:37:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:37:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:37:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:37:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:00.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:00 compute-2 python3.9[20916]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:38:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e54 e54: 3 total, 3 up, 3 in
Oct 09 09:38:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.0 deep-scrub starts
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 4.16 scrub starts
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 4.16 scrub ok
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 3.f scrub starts
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 3.f scrub ok
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 7.18 scrub starts
Oct 09 09:38:00 compute-2 ceph-mon[5983]: 7.18 scrub ok
Oct 09 09:38:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:00 compute-2 ceph-mon[5983]: osdmap e54: 3 total, 3 up, 3 in
Oct 09 09:38:00 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 1.
Oct 09 09:38:00 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:38:00 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.144s CPU time.
Oct 09 09:38:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.0 deep-scrub ok
Oct 09 09:38:00 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.318891622 +0000 UTC m=+0.026861276 container create 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Oct 09 09:38:00 compute-2 systemd[1270]: Created slice User Background Tasks Slice.
Oct 09 09:38:00 compute-2 systemd[9018]: Starting Mark boot as successful...
Oct 09 09:38:00 compute-2 systemd[1270]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 09:38:00 compute-2 systemd[9018]: Finished Mark boot as successful.
Oct 09 09:38:00 compute-2 systemd[1270]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:38:00 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.359647497 +0000 UTC m=+0.067617152 container init 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.364087837 +0000 UTC m=+0.072057490 container start 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:38:00 compute-2 bash[20987]: 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac
Oct 09 09:38:00 compute-2 podman[20987]: 2025-10-09 09:38:00.307808074 +0000 UTC m=+0.015777748 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:38:00 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:00 compute-2 sudo[21168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqvvcvqdhigngwuppoydejdkvxzaqjsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002680.4136362-309-277836742662114/AnsiballZ_setup.py'
Oct 09 09:38:00 compute-2 sudo[21168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:00 compute-2 python3.9[21170]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:38:01 compute-2 sudo[21168]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct 09 09:38:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct 09 09:38:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e55 e55: 3 total, 3 up, 3 in
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 4.14 scrub starts
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 4.14 scrub ok
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 7.19 scrub starts
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 7.19 scrub ok
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 3.0 deep-scrub starts
Oct 09 09:38:01 compute-2 ceph-mon[5983]: 3.0 deep-scrub ok
Oct 09 09:38:01 compute-2 ceph-mon[5983]: pgmap v52: 306 pgs: 2 peering, 170 unknown, 134 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:38:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:01.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:38:01 compute-2 sudo[21252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvtrnwlnuscnzordcmnlufcndgcgtvzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002680.4136362-309-277836742662114/AnsiballZ_dnf.py'
Oct 09 09:38:01 compute-2 sudo[21252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:38:01 compute-2 python3.9[21254]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:38:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:38:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:02.011 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:38:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Oct 09 09:38:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 4.13 scrub starts
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 4.13 scrub ok
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 3.e scrub starts
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 3.e scrub ok
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 7.10 deep-scrub starts
Oct 09 09:38:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct 09 09:38:02 compute-2 ceph-mon[5983]: osdmap e55: 3 total, 3 up, 3 in
Oct 09 09:38:02 compute-2 ceph-mon[5983]: 7.10 deep-scrub ok
Oct 09 09:38:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e56 e56: 3 total, 3 up, 3 in
Oct 09 09:38:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct 09 09:38:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 4.12 scrub starts
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 4.12 scrub ok
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 3.1a scrub starts
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 3.1a scrub ok
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 7.16 deep-scrub starts
Oct 09 09:38:03 compute-2 ceph-mon[5983]: osdmap e56: 3 total, 3 up, 3 in
Oct 09 09:38:03 compute-2 ceph-mon[5983]: 7.16 deep-scrub ok
Oct 09 09:38:03 compute-2 ceph-mon[5983]: pgmap v55: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1.2 KiB/s wr, 3 op/s
Oct 09 09:38:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:03.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:04.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct 09 09:38:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 4.11 scrub starts
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 4.11 scrub ok
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 3.c scrub starts
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 3.c scrub ok
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 7.17 scrub starts
Oct 09 09:38:04 compute-2 ceph-mon[5983]: 7.17 scrub ok
Oct 09 09:38:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e56 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct 09 09:38:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 4.f scrub starts
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 4.f scrub ok
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 3.15 scrub starts
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 3.15 scrub ok
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 7.f scrub starts
Oct 09 09:38:05 compute-2 ceph-mon[5983]: 7.f scrub ok
Oct 09 09:38:05 compute-2 ceph-mon[5983]: pgmap v56: 337 pgs: 31 unknown, 32 peering, 274 active+clean; 457 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 985 B/s wr, 2 op/s
Oct 09 09:38:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:05.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:06.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct 09 09:38:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 4.c scrub starts
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 4.c scrub ok
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 3.19 scrub starts
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 3.19 scrub ok
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 7.c scrub starts
Oct 09 09:38:06 compute-2 ceph-mon[5983]: 7.c scrub ok
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct 09 09:38:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct 09 09:38:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:07.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 4.0 deep-scrub starts
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 4.0 deep-scrub ok
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 3.1f scrub starts
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 3.1f scrub ok
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 7.1e scrub starts
Oct 09 09:38:07 compute-2 ceph-mon[5983]: 7.1e scrub ok
Oct 09 09:38:07 compute-2 ceph-mon[5983]: pgmap v57: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.6 KiB/s wr, 4 op/s
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:38:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e57 e57: 3 total, 3 up, 3 in
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.1f( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.18( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1a( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.11( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.14( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.3( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.a( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.9( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.5( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.4( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.2( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1d( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.7( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.867207527s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440948486s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.13( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.867181778s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440948486s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.857445717s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431503296s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866744995s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440979004s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.7( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.857426643s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431503296s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866710663s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440979004s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856984138s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866912842s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441360474s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.3( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866850853s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441360474s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.5( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856917381s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856478691s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866047859s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441009521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.5( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.866032600s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441009521s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865854263s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.440994263s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.14( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865835190s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.440994263s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.3( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.856459618s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855997086s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855927467s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431396484s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.b( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855923653s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.2( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855903625s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431396484s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855646133s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431381226s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865549088s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441314697s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855628014s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431381226s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.7( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865533829s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441314697s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855484009s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431427002s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855466843s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431427002s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865333557s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441345215s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.17( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.865318298s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441345215s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855165482s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431381226s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.6( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855087280s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431381226s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855023384s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431350708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.4( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.855003357s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431350708s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864980698s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441299438s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854892731s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864933968s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441482544s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.2( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864919662s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441482544s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864726067s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441452026s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1b( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864631653s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441299438s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1f( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864713669s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441452026s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852690697s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429489136s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.19( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852668762s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429489136s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.12( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854758263s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854465485s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431365967s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.a( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854315758s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431365967s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864445686s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441574097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852210045s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429473877s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864321709s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441574097s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.13( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852190018s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429473877s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864096642s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441574097s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.6( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.864078522s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441574097s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.854456902s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431411743s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.d( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.853819847s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431411743s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851585388s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863708496s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441543579s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.14( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851563454s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863759995s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851435661s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429473877s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.c( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851415634s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429473877s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863520622s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441650391s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.a( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863506317s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441650391s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851209641s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.851190567s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863346100s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.9( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863320351s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441741943s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863275528s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441772461s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.16( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863257408s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441772461s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852544785s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.431350708s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.10( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.852529526s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.431350708s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.15( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862961769s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441543579s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1c( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.863590240s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441650391s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850242615s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.16( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850221634s) [0] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850105286s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429458618s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.17( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.850074768s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429458618s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862318993s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441741943s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862718582s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441726685s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.11( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862301826s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441741943s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.10( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862284660s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441726685s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862169266s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441818237s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1e( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.862150192s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441818237s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849740982s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.18( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849614143s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861989021s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441894531s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.1d( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861925125s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441894531s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849368095s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429428101s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861711502s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441833496s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.18( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861655235s) [0] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441833496s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1e( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.849140167s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861316681s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active pruub 132.441940308s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[5.19( empty local-lis/les=47/49 n=0 ec=47/13 lis/c=47/47 les/c/f=49/49/0 sis=57 pruub=11.861275673s) [1] r=-1 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 132.441940308s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.16( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.848836899s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 active pruub 129.429489136s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[3.1f( empty local-lis/les=45/46 n=0 ec=45/11 lis/c=45/45 les/c/f=46/46/0 sis=57 pruub=8.848590851s) [1] r=-1 lpr=57 pi=[45,57)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 129.429489136s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1d( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.11( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.11( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.13( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.13( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.1e( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.16( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1c( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[12.17( empty local-lis/les=0/0 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[7.1d( empty local-lis/les=0/0 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.16( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.17( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.15( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.14( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.18( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.1f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.19( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.3( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.2( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.d( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.17( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.3( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.15( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.19( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.9( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.16( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.5( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.c( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.8( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.8( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.5( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.f( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.b( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1f( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.e( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.3( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.1( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.1( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.d( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.7( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.9( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.8( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.6( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.b( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.b( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.9( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[11.a( empty local-lis/les=0/0 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[4.3( empty local-lis/les=0/0 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.6( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[6.9( empty local-lis/les=0/0 n=0 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.f( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.7( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.5( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.a( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.3( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[9.1d( empty local-lis/les=0/0 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.2( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 57 pg[8.1c( empty local-lis/les=0/0 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:08.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.9 deep-scrub starts
Oct 09 09:38:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.9 deep-scrub ok
Oct 09 09:38:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e58 e58: 3 total, 3 up, 3 in
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.11( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.13( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.17( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.19( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.5( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.3( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.9( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.7( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.d( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 4.18 scrub starts
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 4.18 scrub ok
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 3.4 scrub starts
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 3.4 scrub ok
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 7.a deep-scrub starts
Oct 09 09:38:08 compute-2 ceph-mon[5983]: 7.a deep-scrub ok
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:38:08 compute-2 ceph-mon[5983]: osdmap e57: 3 total, 3 up, 3 in
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.15( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1b( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[10.1( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=58) [2]/[0] r=-1 lpr=58 pi=[53,58)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.17( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.1d( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.17( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.16( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.19( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.15( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.13( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1c( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1f( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.1c( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.1d( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.18( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1a( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.14( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.11( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.11( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.8( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.3( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.5( v 33'9 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.1f( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1d( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.8( v 40'96 (0'0,40'96] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.b( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.14( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.11( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.5( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.13( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.b( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.a( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.f( v 41'42 lc 35'1 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.f( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.3( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.1( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.9( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.d( v 50'68 lc 43'18 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.9( v 40'2 lc 0'0 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.1( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.19( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1e( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.16( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.13( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.e( v 56'99 lc 40'80 (0'0,56'99] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.a( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.6( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.5( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.1f( v 50'68 lc 0'0 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.16( v 50'68 lc 43'35 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.a( v 40'96 (0'0,40'96] local-lis/les=57/58 n=0 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=40'96 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.b( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.8( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.17( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.9( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[11.3( v 56'99 lc 40'84 (0'0,56'99] local-lis/les=57/58 n=1 ec=53/36 lis/c=53/53 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[53,57)/1 crt=56'99 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.4( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.6( v 56'69 lc 43'49 (0'0,56'69] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=56'69 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.7( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.15( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.2( v 40'2 (0'0,40'2] local-lis/les=57/58 n=1 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.18( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[4.2( empty local-lis/les=57/58 n=0 ec=47/12 lis/c=47/47 les/c/f=48/48/0 sis=57) [2] r=0 lpr=57 pi=[47,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.1d( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.c( v 50'68 (0'0,50'68] local-lis/les=57/58 n=0 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.2( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.3( v 33'9 (0'0,33'9] local-lis/les=57/58 n=1 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[9.9( v 33'9 (0'0,33'9] local-lis/les=57/58 n=0 ec=51/32 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=33'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[7.16( empty local-lis/les=57/58 n=0 ec=49/15 lis/c=49/49 les/c/f=50/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[12.7( v 40'2 (0'0,40'2] local-lis/les=57/58 n=0 ec=55/38 lis/c=55/55 les/c/f=56/56/0 sis=57) [2] r=0 lpr=57 pi=[55,57)/1 crt=40'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 58 pg[8.3( v 50'68 (0'0,50'68] local-lis/les=57/58 n=1 ec=51/29 lis/c=51/51 les/c/f=54/54/0 sis=57) [2] r=0 lpr=57 pi=[51,57)/1 crt=50'68 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:08 compute-2 sudo[21312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:08 compute-2 sudo[21312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:08 compute-2 sudo[21312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct 09 09:38:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:09.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct 09 09:38:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e59 e59: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 11.11 scrub starts
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 11.11 scrub ok
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 3.9 deep-scrub starts
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 3.9 deep-scrub ok
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 10.12 deep-scrub starts
Oct 09 09:38:09 compute-2 ceph-mon[5983]: 10.12 deep-scrub ok
Oct 09 09:38:09 compute-2 ceph-mon[5983]: osdmap e58: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-2 ceph-mon[5983]: pgmap v60: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 836 B/s wr, 2 op/s
Oct 09 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 09 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct 09 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 09 09:38:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct 09 09:38:09 compute-2 ceph-mon[5983]: osdmap e59: 3 total, 3 up, 3 in
Oct 09 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.f( v 41'42 lc 35'25 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+recovering rops=1 m=1 mbc={255={(0+1)=1}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.5( v 41'42 lc 35'6 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.3( v 41'42 lc 0'0 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.7( v 41'42 lc 35'11 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 59 pg[6.d( v 41'42 lc 35'7 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/49 les/c/f=58/50/0 sis=57) [2] r=0 lpr=57 pi=[49,57)/1 crt=41'42 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] scrubber<NotActive>: update_scrub_job !!! primary but not scheduled! 
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:10.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e60 e60: 3 total, 3 up, 3 in
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=56'1062 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=56'1062 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 60 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:10 compute-2 ceph-mon[5983]: 4.1e scrub starts
Oct 09 09:38:10 compute-2 ceph-mon[5983]: 4.1e scrub ok
Oct 09 09:38:10 compute-2 ceph-mon[5983]: osdmap e60: 3 total, 3 up, 3 in
Oct 09 09:38:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e61 e61: 3 total, 3 up, 3 in
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.162775993s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.583831787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.f( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.162708282s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.583831787s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.162840843s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.584121704s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.b( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=61 pruub=13.162815094s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.584121704s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.163148880s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.584884644s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.3( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.163119316s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.584884644s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.166052818s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 41'42 active pruub 137.587966919s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[6.7( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=61 pruub=13.165368080s) [0] r=-1 lpr=61 pi=[57,61)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 137.587966919s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.1( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.3( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.5( v 59'1068 (0'0,59'1068] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=59'1068 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 61 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=60) [2] r=0 lpr=60 pi=[53,60)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.9 scrub starts
Oct 09 09:38:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.9 scrub ok
Oct 09 09:38:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct 09 09:38:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:11.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct 09 09:38:11 compute-2 ceph-mon[5983]: 11.15 scrub starts
Oct 09 09:38:11 compute-2 ceph-mon[5983]: 11.15 scrub ok
Oct 09 09:38:11 compute-2 ceph-mon[5983]: pgmap v63: 337 pgs: 337 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 09 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct 09 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 09 09:38:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct 09 09:38:11 compute-2 ceph-mon[5983]: osdmap e61: 3 total, 3 up, 3 in
Oct 09 09:38:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:38:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:12.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:38:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e62 e62: 3 total, 3 up, 3 in
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.11( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 62 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=58/53 les/c/f=59/55/0 sis=61) [2] r=0 lpr=61 pi=[53,61)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.e scrub starts
Oct 09 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093812 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 11.18 scrub starts
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 11.18 scrub ok
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 12.9 scrub starts
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 12.9 scrub ok
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 10.10 scrub starts
Oct 09 09:38:12 compute-2 ceph-mon[5983]: 10.10 scrub ok
Oct 09 09:38:12 compute-2 ceph-mon[5983]: osdmap e62: 3 total, 3 up, 3 in
Oct 09 09:38:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.e scrub ok
Oct 09 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e63 e63: 3 total, 3 up, 3 in
Oct 09 09:38:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.a deep-scrub starts
Oct 09 09:38:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.a deep-scrub ok
Oct 09 09:38:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:13 compute-2 ceph-mon[5983]: 11.e scrub starts
Oct 09 09:38:13 compute-2 ceph-mon[5983]: 11.e scrub ok
Oct 09 09:38:13 compute-2 ceph-mon[5983]: pgmap v66: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 813 B/s, 2 keys/s, 24 objects/s recovering
Oct 09 09:38:13 compute-2 ceph-mon[5983]: osdmap e63: 3 total, 3 up, 3 in
Oct 09 09:38:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:14.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct 09 09:38:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct 09 09:38:14 compute-2 ceph-mon[5983]: 8.a deep-scrub starts
Oct 09 09:38:14 compute-2 ceph-mon[5983]: 8.a deep-scrub ok
Oct 09 09:38:14 compute-2 ceph-mon[5983]: 6.f scrub starts
Oct 09 09:38:14 compute-2 ceph-mon[5983]: 6.f scrub ok
Oct 09 09:38:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct 09 09:38:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct 09 09:38:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000018s ======
Oct 09 09:38:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:15.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000018s
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 10.e scrub starts
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 10.e scrub ok
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 6.7 scrub starts
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 6.7 scrub ok
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 5.e scrub starts
Oct 09 09:38:15 compute-2 ceph-mon[5983]: 5.e scrub ok
Oct 09 09:38:15 compute-2 ceph-mon[5983]: pgmap v68: 337 pgs: 5 active+recovery_wait+remapped, 1 active+recovering+remapped, 8 remapped+peering, 1 active+remapped, 9 peering, 313 active+clean; 457 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 22/226 objects misplaced (9.735%); 798 B/s, 2 keys/s, 24 objects/s recovering
Oct 09 09:38:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000006:nfs.cephfs.1: -2
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:16.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.b deep-scrub starts
Oct 09 09:38:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.b deep-scrub ok
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 10.1e scrub starts
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 10.1e scrub ok
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 7.12 scrub starts
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 7.12 scrub ok
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 3.8 scrub starts
Oct 09 09:38:16 compute-2 ceph-mon[5983]: 3.8 scrub ok
Oct 09 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50cf00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88001ed0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct 09 09:38:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct 09 09:38:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000019s ======
Oct 09 09:38:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:17.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000019s
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 10.16 scrub starts
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 10.16 scrub ok
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 12.14 scrub starts
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 12.14 scrub ok
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 5.b deep-scrub starts
Oct 09 09:38:17 compute-2 ceph-mon[5983]: 5.b deep-scrub ok
Oct 09 09:38:17 compute-2 ceph-mon[5983]: pgmap v69: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 580 B/s, 3 keys/s, 24 objects/s recovering
Oct 09 09:38:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 09 09:38:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct 09 09:38:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e64 e64: 3 total, 3 up, 3 in
Oct 09 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:17 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:18.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct 09 09:38:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 64 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=64) [2] r=0 lpr=64 pi=[53,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:18 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e65 e65: 3 total, 3 up, 3 in
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.1c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.14( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.4( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:18 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 65 pg[10.c( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 10.2 scrub starts
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 10.2 scrub ok
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 12.1f deep-scrub starts
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 12.1f deep-scrub ok
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 5.d scrub starts
Oct 09 09:38:18 compute-2 ceph-mon[5983]: 5.d scrub ok
Oct 09 09:38:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 09 09:38:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct 09 09:38:18 compute-2 ceph-mon[5983]: osdmap e64: 3 total, 3 up, 3 in
Oct 09 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c001d70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093818 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50cf00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Oct 09 09:38:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Oct 09 09:38:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:19.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e66 e66: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 9.1a scrub starts
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 9.1a scrub ok
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 12.d scrub starts
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 12.d scrub ok
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 5.4 scrub starts
Oct 09 09:38:19 compute-2 ceph-mon[5983]: 5.4 scrub ok
Oct 09 09:38:19 compute-2 ceph-mon[5983]: osdmap e65: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-2 ceph-mon[5983]: pgmap v72: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct 09 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 09 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct 09 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 09 09:38:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct 09 09:38:19 compute-2 ceph-mon[5983]: osdmap e66: 3 total, 3 up, 3 in
Oct 09 09:38:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e66 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:19 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093819 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:20.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.918619156s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=63'1069 lcod 63'1070 mlcod 63'1070 active pruub 148.426910400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.918568611s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 unknown NOTIFY pruub 148.426910400s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.074834824s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 41'42 active pruub 145.583831787s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.5( v 41'42 (0'0,41'42] local-lis/les=57/58 n=2 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.074742317s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 145.583831787s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=15.917100906s) [1] r=-1 lpr=66 pi=[61,66)/1 crt=40'1059 mlcod 0'0 active pruub 149.426162720s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.917779922s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 active pruub 148.427368164s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.917766571s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.427368164s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=66 pruub=15.916796684s) [1] r=-1 lpr=66 pi=[61,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 149.426162720s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.912860870s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 active pruub 148.423599243s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=66 pruub=14.912676811s) [1] r=-1 lpr=66 pi=[60,66)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.423599243s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.076931000s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 41'42 active pruub 145.588119507s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 66 pg[6.d( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/59/0 sis=66 pruub=12.076904297s) [0] r=-1 lpr=66 pi=[57,66)/1 crt=41'42 mlcod 0'0 unknown NOTIFY pruub 145.588119507s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct 09 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c008f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e67 e67: 3 total, 3 up, 3 in
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 8.1a deep-scrub starts
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 8.1a deep-scrub ok
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 7.0 deep-scrub starts
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 7.0 deep-scrub ok
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 3.1d scrub starts
Oct 09 09:38:20 compute-2 ceph-mon[5983]: 3.1d scrub ok
Oct 09 09:38:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 09 09:38:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] r=0 lpr=67 pi=[60,67)/1 crt=63'1069 lcod 63'1070 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=56'1062 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:20 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 67 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=0/0 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=56'1062 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c008f40 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e68 e68: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.4( v 66'1068 (0'0,66'1068] local-lis/les=67/68 n=6 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=66'1068 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.5( v 63'1071 (0'0,63'1071] local-lis/les=67/68 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[60,67)/1 crt=63'1071 lcod 63'1070 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 68 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=67) [1]/[2] async=[1] r=0 lpr=67 pi=[61,67)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:21.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 9.1b deep-scrub starts
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 9.1b deep-scrub ok
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 7.7 scrub starts
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 7.7 scrub ok
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 5.1a scrub starts
Oct 09 09:38:21 compute-2 ceph-mon[5983]: 5.1a scrub ok
Oct 09 09:38:21 compute-2 ceph-mon[5983]: pgmap v74: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 38 B/s, 1 keys/s, 8 objects/s recovering
Oct 09 09:38:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 09 09:38:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct 09 09:38:21 compute-2 ceph-mon[5983]: osdmap e67: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-2 ceph-mon[5983]: osdmap e68: 3 total, 3 up, 3 in
Oct 09 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:21 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:22.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e69 e69: 3 total, 3 up, 3 in
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.004076004s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=63'1071 lcod 68'1073 mlcod 68'1073 active pruub 150.429351807s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.5( v 68'1074 (0'0,68'1074] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.003991127s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=63'1071 lcod 68'1073 mlcod 0'0 unknown NOTIFY pruub 150.429351807s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.003008842s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429412842s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002962112s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429412842s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.002772331s) [1] async=[1] r=-1 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429428101s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1d( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/61 les/c/f=68/62/0 sis=69 pruub=15.002682686s) [1] r=-1 lpr=69 pi=[61,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002573013s) [1] async=[1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 40'1059 active pruub 150.429428101s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.15( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=5 ec=53/34 lis/c=67/60 les/c/f=68/61/0 sis=69 pruub=15.002515793s) [1] r=-1 lpr=69 pi=[60,69)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 150.429428101s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:22 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 69 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=65/53 les/c/f=66/55/0 sis=68) [2] r=0 lpr=68 pi=[53,68)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Oct 09 09:38:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Oct 09 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880029d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:22 compute-2 ceph-mon[5983]: 9.19 scrub starts
Oct 09 09:38:22 compute-2 ceph-mon[5983]: 9.19 scrub ok
Oct 09 09:38:22 compute-2 ceph-mon[5983]: 7.d scrub starts
Oct 09 09:38:22 compute-2 ceph-mon[5983]: 7.d scrub ok
Oct 09 09:38:22 compute-2 ceph-mon[5983]: osdmap e69: 3 total, 3 up, 3 in
Oct 09 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e70 e70: 3 total, 3 up, 3 in
Oct 09 09:38:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:38:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:23.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:38:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct 09 09:38:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 9.1e scrub starts
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 9.1e scrub ok
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 12.0 scrub starts
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 12.0 scrub ok
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 10.14 deep-scrub starts
Oct 09 09:38:23 compute-2 ceph-mon[5983]: 10.14 deep-scrub ok
Oct 09 09:38:23 compute-2 ceph-mon[5983]: pgmap v78: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 111 B/s, 2 objects/s recovering
Oct 09 09:38:23 compute-2 ceph-mon[5983]: mgrmap e32: compute-0.lwqgfy(active, since 92s), standbys: compute-2.takdnm, compute-1.etokpp
Oct 09 09:38:23 compute-2 ceph-mon[5983]: osdmap e70: 3 total, 3 up, 3 in
Oct 09 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:23 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:24.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e71 e71: 3 total, 3 up, 3 in
Oct 09 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c009c50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 8.1e scrub starts
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 8.1e scrub ok
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 7.1 scrub starts
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 7.1 scrub ok
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 10.1c scrub starts
Oct 09 09:38:24 compute-2 ceph-mon[5983]: 10.1c scrub ok
Oct 09 09:38:24 compute-2 ceph-mon[5983]: osdmap e71: 3 total, 3 up, 3 in
Oct 09 09:38:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:25.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:25 compute-2 ceph-mon[5983]: 9.1f scrub starts
Oct 09 09:38:25 compute-2 ceph-mon[5983]: 9.1f scrub ok
Oct 09 09:38:25 compute-2 ceph-mon[5983]: 12.f scrub starts
Oct 09 09:38:25 compute-2 ceph-mon[5983]: 12.f scrub ok
Oct 09 09:38:25 compute-2 ceph-mon[5983]: pgmap v81: 337 pgs: 2 active+recovery_wait+remapped, 4 unknown, 4 remapped+peering, 4 peering, 1 active+recovering, 322 active+clean; 457 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 7/204 objects misplaced (3.431%); 112 B/s, 2 objects/s recovering
Oct 09 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:25 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:26.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct 09 09:38:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct 09 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e72 e72: 3 total, 3 up, 3 in
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.727802277s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.426910400s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.727765083s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.426910400s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72 pruub=9.726618767s) [0] r=-1 lpr=72 pi=[61,72)/1 crt=40'1059 mlcod 0'0 active pruub 149.426177979s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=72 pruub=9.726602554s) [0] r=-1 lpr=72 pi=[61,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 149.426177979s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.724181175s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.424346924s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.724164963s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.424346924s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.726887703s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 active pruub 148.427352905s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:26 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 72 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=72 pruub=8.726868629s) [0] r=-1 lpr=72 pi=[60,72)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 148.427352905s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:26 compute-2 ceph-mon[5983]: 8.1d deep-scrub starts
Oct 09 09:38:26 compute-2 ceph-mon[5983]: 8.1d deep-scrub ok
Oct 09 09:38:26 compute-2 ceph-mon[5983]: 12.1 scrub starts
Oct 09 09:38:26 compute-2 ceph-mon[5983]: 12.1 scrub ok
Oct 09 09:38:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 09 09:38:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct 09 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.0 deep-scrub starts
Oct 09 09:38:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.0 deep-scrub ok
Oct 09 09:38:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:27.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e73 e73: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:27 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 73 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 9.1c scrub starts
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 9.1c scrub ok
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 12.5 scrub starts
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 12.5 scrub ok
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 3.11 scrub starts
Oct 09 09:38:27 compute-2 ceph-mon[5983]: 3.11 scrub ok
Oct 09 09:38:27 compute-2 ceph-mon[5983]: pgmap v82: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 9 objects/s recovering
Oct 09 09:38:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 09 09:38:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct 09 09:38:27 compute-2 ceph-mon[5983]: osdmap e72: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-2 ceph-mon[5983]: osdmap e73: 3 total, 3 up, 3 in
Oct 09 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:27 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880036e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:28.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct 09 09:38:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct 09 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00ad50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:28 compute-2 sudo[21445]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:28 compute-2 sudo[21445]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:28 compute-2 sudo[21445]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e74 e74: 3 total, 3 up, 3 in
Oct 09 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:28 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 74 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 11.0 scrub starts
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 11.0 scrub ok
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 12.1b scrub starts
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 5.0 deep-scrub starts
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 12.1b scrub ok
Oct 09 09:38:28 compute-2 ceph-mon[5983]: 5.0 deep-scrub ok
Oct 09 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct 09 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 09:38:28 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct 09 09:38:28 compute-2 ceph-mon[5983]: osdmap e74: 3 total, 3 up, 3 in
Oct 09 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:38:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 09 09:38:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 09 09:38:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:29.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e75 e75: 3 total, 3 up, 3 in
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001338005s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.714279175s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000436783s) [0] async=[0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713455200s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000190735s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713485718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995488167s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.709838867s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:29 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 9.2 scrub starts
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 9.2 scrub ok
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 7.15 scrub starts
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 7.15 scrub ok
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 5.8 scrub starts
Oct 09 09:38:29 compute-2 ceph-mon[5983]: 5.8 scrub ok
Oct 09 09:38:29 compute-2 ceph-mon[5983]: pgmap v85: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 148 B/s, 9 objects/s recovering
Oct 09 09:38:29 compute-2 ceph-mon[5983]: osdmap e75: 3 total, 3 up, 3 in
Oct 09 09:38:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:29 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:38:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:30.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:38:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 09 09:38:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 09 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:30 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e76 e76: 3 total, 3 up, 3 in
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 8.0 scrub starts
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 8.0 scrub ok
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 12.16 scrub starts
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 12.16 scrub ok
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 5.12 scrub starts
Oct 09 09:38:30 compute-2 ceph-mon[5983]: 5.12 scrub ok
Oct 09 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 09 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct 09 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 09 09:38:30 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct 09 09:38:30 compute-2 ceph-mon[5983]: osdmap e76: 3 total, 3 up, 3 in
Oct 09 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311874390s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 153.583740234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.154195786s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.426406860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.152080536s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.424911499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:30 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:31 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e77 e77: 3 total, 3 up, 3 in
Oct 09 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:31 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:31.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 09 09:38:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 9.1 scrub starts
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 9.1 scrub ok
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 12.15 scrub starts
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 12.15 scrub ok
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 5.13 scrub starts
Oct 09 09:38:31 compute-2 ceph-mon[5983]: 5.13 scrub ok
Oct 09 09:38:31 compute-2 ceph-mon[5983]: pgmap v88: 337 pgs: 337 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:31 compute-2 ceph-mon[5983]: osdmap e77: 3 total, 3 up, 3 in
Oct 09 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:38:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:38:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:38:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:32.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:38:32 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e78 e78: 3 total, 3 up, 3 in
Oct 09 09:38:32 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:32 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00be50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 11.d deep-scrub starts
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 11.d deep-scrub ok
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 5.1c deep-scrub starts
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 5.1c deep-scrub ok
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 7.1d scrub starts
Oct 09 09:38:32 compute-2 ceph-mon[5983]: 7.1d scrub ok
Oct 09 09:38:32 compute-2 ceph-mon[5983]: osdmap e78: 3 total, 3 up, 3 in
Oct 09 09:38:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 09 09:38:32 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct 09 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e79 e79: 3 total, 3 up, 3 in
Oct 09 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001540184s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432449341s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001205444s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432983398s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:33 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:33.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 09 09:38:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 09 09:38:33 compute-2 ceph-mon[5983]: 8.e scrub starts
Oct 09 09:38:33 compute-2 ceph-mon[5983]: 8.e scrub ok
Oct 09 09:38:33 compute-2 ceph-mon[5983]: pgmap v92: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct 09 09:38:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 09 09:38:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct 09 09:38:33 compute-2 ceph-mon[5983]: osdmap e79: 3 total, 3 up, 3 in
Oct 09 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:38:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:34.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:38:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e80 e80: 3 total, 3 up, 3 in
Oct 09 09:38:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 09 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:34 compute-2 ceph-mon[5983]: 9.14 scrub starts
Oct 09 09:38:34 compute-2 ceph-mon[5983]: 9.14 scrub ok
Oct 09 09:38:34 compute-2 ceph-mon[5983]: 10.11 scrub starts
Oct 09 09:38:34 compute-2 ceph-mon[5983]: 10.11 scrub ok
Oct 09 09:38:34 compute-2 ceph-mon[5983]: osdmap e80: 3 total, 3 up, 3 in
Oct 09 09:38:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 09 09:38:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct 09 09:38:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e80 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 09 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:38:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e81 e81: 3 total, 3 up, 3 in
Oct 09 09:38:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:35.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830674171s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 active pruub 164.427581787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827572823s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 active pruub 157.424896240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:35 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 09 09:38:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 11.b scrub starts
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 11.b scrub ok
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 10.17 deep-scrub starts
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 10.17 deep-scrub ok
Oct 09 09:38:35 compute-2 ceph-mon[5983]: pgmap v95: 337 pgs: 2 active+remapped, 335 active+clean; 458 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.0 KiB/s wr, 2 op/s; 195 B/s, 7 objects/s recovering
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 10.13 scrub starts
Oct 09 09:38:35 compute-2 ceph-mon[5983]: 10.13 scrub ok
Oct 09 09:38:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 09 09:38:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct 09 09:38:35 compute-2 ceph-mon[5983]: osdmap e81: 3 total, 3 up, 3 in
Oct 09 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:35 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff880043f0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:38:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:36.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:38:36 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e82 e82: 3 total, 3 up, 3 in
Oct 09 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:36 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 09 09:38:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 09 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 4.4 scrub starts
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 4.4 scrub ok
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 6.d scrub starts
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 6.d scrub ok
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 10.3 scrub starts
Oct 09 09:38:36 compute-2 ceph-mon[5983]: 10.3 scrub ok
Oct 09 09:38:36 compute-2 ceph-mon[5983]: osdmap e82: 3 total, 3 up, 3 in
Oct 09 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:37 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e83 e83: 3 total, 3 up, 3 in
Oct 09 09:38:37 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:37 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 09 09:38:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 09 09:38:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:37.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 4.7 scrub starts
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 4.7 scrub ok
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 10.7 scrub starts
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 10.7 scrub ok
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 6.1 scrub starts
Oct 09 09:38:37 compute-2 ceph-mon[5983]: 6.1 scrub ok
Oct 09 09:38:37 compute-2 ceph-mon[5983]: pgmap v98: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:37 compute-2 ceph-mon[5983]: osdmap e83: 3 total, 3 up, 3 in
Oct 09 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:37 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:38.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:38 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e84 e84: 3 total, 3 up, 3 in
Oct 09 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988700867s) [0] async=[0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.435379028s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.988006592s) [0] async=[0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.434722900s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:38 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:38 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Oct 09 09:38:38 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Oct 09 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 11.10 scrub starts
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 11.10 scrub ok
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 6.8 scrub starts
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 6.8 scrub ok
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 11.17 scrub starts
Oct 09 09:38:38 compute-2 ceph-mon[5983]: 11.17 scrub ok
Oct 09 09:38:38 compute-2 ceph-mon[5983]: osdmap e84: 3 total, 3 up, 3 in
Oct 09 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e85 e85: 3 total, 3 up, 3 in
Oct 09 09:38:39 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 09 09:38:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:38:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:39.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:38:39 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 09 09:38:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e85 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 8.13 deep-scrub starts
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 8.13 deep-scrub ok
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 9.10 scrub starts
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 9.10 scrub ok
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 12.17 scrub starts
Oct 09 09:38:39 compute-2 ceph-mon[5983]: 12.17 scrub ok
Oct 09 09:38:39 compute-2 ceph-mon[5983]: pgmap v101: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:39 compute-2 ceph-mon[5983]: osdmap e85: 3 total, 3 up, 3 in
Oct 09 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:39 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:40.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:40 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 09 09:38:40 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 09 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 9.0 scrub starts
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 9.0 scrub ok
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 11.12 scrub starts
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 11.12 scrub ok
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 11.16 scrub starts
Oct 09 09:38:40 compute-2 ceph-mon[5983]: 11.16 scrub ok
Oct 09 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:38:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:41.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:38:41 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 09 09:38:41 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 8.1 scrub starts
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 8.1 scrub ok
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 5.1f scrub starts
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 5.1f scrub ok
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 4.19 scrub starts
Oct 09 09:38:41 compute-2 ceph-mon[5983]: 4.19 scrub ok
Oct 09 09:38:41 compute-2 ceph-mon[5983]: pgmap v103: 337 pgs: 4 unknown, 1 peering, 332 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:41 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093841 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:42.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:42 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 09 09:38:42 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 09 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:42 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e86 e86: 3 total, 3 up, 3 in
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 11.2 deep-scrub starts
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 11.2 deep-scrub ok
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 9.15 scrub starts
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 9.15 scrub ok
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 11.13 scrub starts
Oct 09 09:38:42 compute-2 ceph-mon[5983]: 11.13 scrub ok
Oct 09 09:38:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 09 09:38:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct 09 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:43 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 09 09:38:43 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 09 09:38:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:43.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 11.c scrub starts
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 11.c scrub ok
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 5.1b scrub starts
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 5.1b scrub ok
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 4.1c scrub starts
Oct 09 09:38:43 compute-2 ceph-mon[5983]: 4.1c scrub ok
Oct 09 09:38:43 compute-2 ceph-mon[5983]: pgmap v104: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 36 op/s; 36 B/s, 4 objects/s recovering
Oct 09 09:38:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 09 09:38:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct 09 09:38:43 compute-2 ceph-mon[5983]: osdmap e86: 3 total, 3 up, 3 in
Oct 09 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:43 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:44 compute-2 sudo[21514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:38:44 compute-2 sudo[21514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:44 compute-2 sudo[21514]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:44.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:44 compute-2 sudo[21539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:38:44 compute-2 sudo[21539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:44 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 09 09:38:44 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 09 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e87 e87: 3 total, 3 up, 3 in
Oct 09 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621111870s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 active pruub 167.427932739s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616195679s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 active pruub 166.424423218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 9.c scrub starts
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 9.c scrub ok
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 8.14 scrub starts
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 8.14 scrub ok
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 8.15 scrub starts
Oct 09 09:38:44 compute-2 ceph-mon[5983]: 8.15 scrub ok
Oct 09 09:38:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 09 09:38:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct 09 09:38:44 compute-2 sudo[21539]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:45 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 09 09:38:45 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 09 09:38:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:45.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e88 e88: 3 total, 3 up, 3 in
Oct 09 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 11.9 scrub starts
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 11.9 scrub ok
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 5.18 scrub starts
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 5.18 scrub ok
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 4.1f scrub starts
Oct 09 09:38:45 compute-2 ceph-mon[5983]: 4.1f scrub ok
Oct 09 09:38:45 compute-2 ceph-mon[5983]: pgmap v106: 337 pgs: 337 active+clean; 458 KiB data, 143 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 0 B/s wr, 35 op/s; 35 B/s, 4 objects/s recovering
Oct 09 09:38:45 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 09 09:38:45 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct 09 09:38:45 compute-2 ceph-mon[5983]: osdmap e87: 3 total, 3 up, 3 in
Oct 09 09:38:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:45 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:46.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:46 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 09 09:38:46 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 09 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e89 e89: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 9.4 scrub starts
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 9.4 scrub ok
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 3.1c scrub starts
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 3.1c scrub ok
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 9.1d scrub starts
Oct 09 09:38:46 compute-2 ceph-mon[5983]: 9.1d scrub ok
Oct 09 09:38:46 compute-2 ceph-mon[5983]: osdmap e88: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:46 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:46 compute-2 ceph-mon[5983]: osdmap e89: 3 total, 3 up, 3 in
Oct 09 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:47.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:47 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub starts
Oct 09 09:38:47 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub ok
Oct 09 09:38:47 compute-2 sudo[21252]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e90 e90: 3 total, 3 up, 3 in
Oct 09 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987930298s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.825469971s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986706734s) [0] async=[0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.824310303s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:47 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 11.6 scrub starts
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 11.6 scrub ok
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 5.15 scrub starts
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 5.15 scrub ok
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 8.1c scrub starts
Oct 09 09:38:47 compute-2 ceph-mon[5983]: 8.1c scrub ok
Oct 09 09:38:47 compute-2 ceph-mon[5983]: pgmap v109: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 33 op/s; 36 B/s, 4 objects/s recovering
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:38:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:47 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:48.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:48 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 09 09:38:48 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 09 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff94001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:48 compute-2 sudo[21624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:38:48 compute-2 sudo[21624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:48 compute-2 sudo[21624]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e91 e91: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 4.b scrub starts
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 4.b scrub ok
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 11.1b deep-scrub starts
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 11.1b deep-scrub ok
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 12.18 deep-scrub starts
Oct 09 09:38:48 compute-2 ceph-mon[5983]: 12.18 deep-scrub ok
Oct 09 09:38:48 compute-2 ceph-mon[5983]: osdmap e90: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-2 ceph-mon[5983]: osdmap e91: 3 total, 3 up, 3 in
Oct 09 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:49 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Oct 09 09:38:49 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Oct 09 09:38:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:49.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 8.7 scrub starts
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 8.7 scrub ok
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 8.19 scrub starts
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 8.19 scrub ok
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 7.11 scrub starts
Oct 09 09:38:49 compute-2 ceph-mon[5983]: 7.11 scrub ok
Oct 09 09:38:49 compute-2 ceph-mon[5983]: pgmap v112: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:49 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001320 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:49 compute-2 sudo[21650]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:38:49 compute-2 sudo[21650]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:38:49 compute-2 sudo[21650]: pam_unix(sudo:session): session closed for user root
Oct 09 09:38:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:38:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:50.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:38:50 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub starts
Oct 09 09:38:50 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub ok
Oct 09 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 11.1f scrub starts
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 3.a scrub starts
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 3.a scrub ok
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 11.1f scrub ok
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 12.1a scrub starts
Oct 09 09:38:50 compute-2 ceph-mon[5983]: 12.1a scrub ok
Oct 09 09:38:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:38:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c00cef0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 09 09:38:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 09 09:38:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:51.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:51 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 4.10 scrub starts
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 4.10 scrub ok
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 4.d scrub starts
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 4.d scrub ok
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 12.11 scrub starts
Oct 09 09:38:51 compute-2 ceph-mon[5983]: 12.11 scrub ok
Oct 09 09:38:51 compute-2 ceph-mon[5983]: pgmap v114: 337 pgs: 4 remapped+peering, 333 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:52.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 09 09:38:52 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 09 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e92 e92: 3 total, 3 up, 3 in
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 12.1c scrub starts
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 12.1c scrub ok
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 11.1c deep-scrub starts
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 11.1c deep-scrub ok
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 9.5 scrub starts
Oct 09 09:38:52 compute-2 ceph-mon[5983]: 9.5 scrub ok
Oct 09 09:38:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 09 09:38:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct 09 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 09 09:38:53 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 09 09:38:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:53.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:53 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 12.12 scrub starts
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 12.12 scrub ok
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 11.5 scrub starts
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 11.5 scrub ok
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 4.8 scrub starts
Oct 09 09:38:53 compute-2 ceph-mon[5983]: 4.8 scrub ok
Oct 09 09:38:53 compute-2 ceph-mon[5983]: pgmap v115: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct 09 09:38:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 09 09:38:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct 09 09:38:53 compute-2 ceph-mon[5983]: osdmap e92: 3 total, 3 up, 3 in
Oct 09 09:38:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e93 e93: 3 total, 3 up, 3 in
Oct 09 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:54.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 09 09:38:54 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 09 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 7.e scrub starts
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 7.e scrub ok
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 4.a deep-scrub starts
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 4.a deep-scrub ok
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 7.14 scrub starts
Oct 09 09:38:54 compute-2 ceph-mon[5983]: 7.14 scrub ok
Oct 09 09:38:54 compute-2 ceph-mon[5983]: osdmap e93: 3 total, 3 up, 3 in
Oct 09 09:38:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 09 09:38:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct 09 09:38:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e94 e94: 3 total, 3 up, 3 in
Oct 09 09:38:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:54 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Oct 09 09:38:55 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Oct 09 09:38:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:55.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:55 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 12.10 scrub starts
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 12.10 scrub ok
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 11.1a scrub starts
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 11.1a scrub ok
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 7.1f scrub starts
Oct 09 09:38:55 compute-2 ceph-mon[5983]: 7.1f scrub ok
Oct 09 09:38:55 compute-2 ceph-mon[5983]: pgmap v118: 337 pgs: 337 active+clean; 458 KiB data, 147 MiB used, 60 GiB / 60 GiB avail; 73 B/s, 4 objects/s recovering
Oct 09 09:38:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 09 09:38:55 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct 09 09:38:55 compute-2 ceph-mon[5983]: osdmap e94: 3 total, 3 up, 3 in
Oct 09 09:38:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e95 e95: 3 total, 3 up, 3 in
Oct 09 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:55 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000026s ======
Oct 09 09:38:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:56.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000026s
Oct 09 09:38:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 09 09:38:56 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 09 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 5.1d deep-scrub starts
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 5.1d deep-scrub ok
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 9.a scrub starts
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 9.a scrub ok
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 12.3 scrub starts
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 12.3 scrub ok
Oct 09 09:38:56 compute-2 ceph-mon[5983]: osdmap e95: 3 total, 3 up, 3 in
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 3.12 deep-scrub starts
Oct 09 09:38:56 compute-2 ceph-mon[5983]: 3.12 deep-scrub ok
Oct 09 09:38:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e96 e96: 3 total, 3 up, 3 in
Oct 09 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c001e20 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 09 09:38:57 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 09 09:38:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:38:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:57.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:57 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 9.d deep-scrub starts
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 9.d deep-scrub ok
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 8.b scrub starts
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 8.b scrub ok
Oct 09 09:38:57 compute-2 ceph-mon[5983]: pgmap v121: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:57 compute-2 ceph-mon[5983]: osdmap e96: 3 total, 3 up, 3 in
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 12.19 deep-scrub starts
Oct 09 09:38:57 compute-2 ceph-mon[5983]: 12.19 deep-scrub ok
Oct 09 09:38:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e97 e97: 3 total, 3 up, 3 in
Oct 09 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:57 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:38:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:38:58.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093858 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:38:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Oct 09 09:38:58 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Oct 09 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 5.1 scrub starts
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 5.1 scrub ok
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 8.11 scrub starts
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 8.11 scrub ok
Oct 09 09:38:58 compute-2 ceph-mon[5983]: osdmap e97: 3 total, 3 up, 3 in
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 5.19 scrub starts
Oct 09 09:38:58 compute-2 ceph-mon[5983]: 5.19 scrub ok
Oct 09 09:38:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e98 e98: 3 total, 3 up, 3 in
Oct 09 09:38:58 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:58 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 09 09:38:59 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 09 09:38:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:38:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:38:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:38:59.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:38:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:38:59 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 3.5 scrub starts
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 3.5 scrub ok
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 9.13 deep-scrub starts
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 9.13 deep-scrub ok
Oct 09 09:38:59 compute-2 ceph-mon[5983]: pgmap v124: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:38:59 compute-2 ceph-mon[5983]: osdmap e98: 3 total, 3 up, 3 in
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 5.3 scrub starts
Oct 09 09:38:59 compute-2 ceph-mon[5983]: 5.3 scrub ok
Oct 09 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:38:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:38:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:38:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:00.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 09 09:39:00 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 09 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 3.d scrub starts
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 3.d scrub ok
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 8.5 scrub starts
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 8.5 scrub ok
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 5.17 scrub starts
Oct 09 09:39:00 compute-2 ceph-mon[5983]: 5.17 scrub ok
Oct 09 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:00 compute-2 sudo[21813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctevhfpgsmbefkngysfrfpniuluwcnqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002740.5780792-344-86154767829950/AnsiballZ_command.py'
Oct 09 09:39:00 compute-2 sudo[21813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:00 compute-2 python3.9[21815]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 09 09:39:01 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 09 09:39:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:01.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:01 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 11.7 scrub starts
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 11.7 scrub ok
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 8.f scrub starts
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 8.f scrub ok
Oct 09 09:39:01 compute-2 ceph-mon[5983]: pgmap v126: 337 pgs: 2 unknown, 1 peering, 334 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 12.a deep-scrub starts
Oct 09 09:39:01 compute-2 ceph-mon[5983]: 12.a deep-scrub ok
Oct 09 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:01 compute-2 sudo[21813]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:02.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 09 09:39:02 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 09 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:02 compute-2 sudo[22102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwqfrfhfifivvymgrfwuunpnzaljgkpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002742.0276315-369-154596419288748/AnsiballZ_selinux.py'
Oct 09 09:39:02 compute-2 sudo[22102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e99 e99: 3 total, 3 up, 3 in
Oct 09 09:39:02 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 11.4 scrub starts
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 11.4 scrub ok
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 4.9 scrub starts
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 4.9 scrub ok
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 7.3 scrub starts
Oct 09 09:39:02 compute-2 ceph-mon[5983]: 7.3 scrub ok
Oct 09 09:39:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct 09 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:02 compute-2 python3.9[22104]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct 09 09:39:02 compute-2 sudo[22102]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:39:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:03.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:39:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Oct 09 09:39:03 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Oct 09 09:39:03 compute-2 sudo[22254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vanfakhbuxpheziphfgnplbodnafcnlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002743.1374245-401-118378811435988/AnsiballZ_command.py'
Oct 09 09:39:03 compute-2 sudo[22254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:03 compute-2 python3.9[22256]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct 09 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:03 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003af0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:03 compute-2 sudo[22254]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 5.9 scrub starts
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 5.9 scrub ok
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 11.19 scrub starts
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 11.19 scrub ok
Oct 09 09:39:03 compute-2 ceph-mon[5983]: pgmap v127: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct 09 09:39:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct 09 09:39:03 compute-2 ceph-mon[5983]: osdmap e99: 3 total, 3 up, 3 in
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 5.14 scrub starts
Oct 09 09:39:03 compute-2 ceph-mon[5983]: 5.14 scrub ok
Oct 09 09:39:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e100 e100: 3 total, 3 up, 3 in
Oct 09 09:39:03 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:03 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:03 compute-2 sudo[22407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzqbvoscxwbhaoctvkdtvffuwksulmdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002743.669393-425-62376432362282/AnsiballZ_file.py'
Oct 09 09:39:03 compute-2 sudo[22407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:04 compute-2 python3.9[22409]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:39:04 compute-2 sudo[22407]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:04.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 09 09:39:04 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 09 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:04 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa0003820 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e100 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:04 compute-2 sudo[22560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqbfznxbscsapkdneoehsxqnltadcsmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002744.2330523-449-62315518979020/AnsiballZ_mount.py'
Oct 09 09:39:04 compute-2 sudo[22560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 11.1 scrub starts
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 11.1 scrub ok
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 12.1e scrub starts
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 12.1e scrub ok
Oct 09 09:39:04 compute-2 ceph-mon[5983]: osdmap e100: 3 total, 3 up, 3 in
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 3.7 scrub starts
Oct 09 09:39:04 compute-2 ceph-mon[5983]: 3.7 scrub ok
Oct 09 09:39:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct 09 09:39:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e101 e101: 3 total, 3 up, 3 in
Oct 09 09:39:04 compute-2 python3.9[22562]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct 09 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:04 compute-2 sudo[22560]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:04 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:05.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 09 09:39:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 09 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:05 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e102 e102: 3 total, 3 up, 3 in
Oct 09 09:39:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 3.3 scrub starts
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 3.3 scrub ok
Oct 09 09:39:05 compute-2 ceph-mon[5983]: pgmap v130: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 21 op/s; 212 B/s, 6 objects/s recovering
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 11.8 scrub starts
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 11.8 scrub ok
Oct 09 09:39:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct 09 09:39:05 compute-2 ceph-mon[5983]: osdmap e101: 3 total, 3 up, 3 in
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 3.b scrub starts
Oct 09 09:39:05 compute-2 ceph-mon[5983]: 3.b scrub ok
Oct 09 09:39:05 compute-2 sudo[22713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvoltisaoowqcbmqccjhtnlkodgmrezc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002745.47615-534-87870870163144/AnsiballZ_file.py'
Oct 09 09:39:05 compute-2 sudo[22713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:05 compute-2 python3.9[22715]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:05 compute-2 sudo[22713]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:06.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:06 compute-2 sudo[22866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxjgxoueentxesgawveccgxamcjitoqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002746.0222316-558-184461912565859/AnsiballZ_stat.py'
Oct 09 09:39:06 compute-2 sudo[22866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:39:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Oct 09 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:06 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Oct 09 09:39:06 compute-2 python3.9[22868]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:06 compute-2 sudo[22866]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-2 sudo[22944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnthdzbmjpptayiusqcdrbcmmitmivfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002746.0222316-558-184461912565859/AnsiballZ_file.py'
Oct 09 09:39:06 compute-2 sudo[22944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:06 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e103 e103: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 9.e scrub starts
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 9.e scrub ok
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 9.16 scrub starts
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 9.16 scrub ok
Oct 09 09:39:06 compute-2 ceph-mon[5983]: osdmap e102: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 5.6 scrub starts
Oct 09 09:39:06 compute-2 ceph-mon[5983]: 5.6 scrub ok
Oct 09 09:39:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct 09 09:39:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct 09 09:39:06 compute-2 ceph-mon[5983]: osdmap e103: 3 total, 3 up, 3 in
Oct 09 09:39:06 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:06 compute-2 python3.9[22946]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:39:06 compute-2 sudo[22944]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:06 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c003c70 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:07.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct 09 09:39:07 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct 09 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:07 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e104 e104: 3 total, 3 up, 3 in
Oct 09 09:39:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:07 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 5.2 scrub starts
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 5.2 scrub ok
Oct 09 09:39:07 compute-2 ceph-mon[5983]: pgmap v133: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 12.13 scrub starts
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 12.13 scrub ok
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 7.4 scrub starts
Oct 09 09:39:07 compute-2 ceph-mon[5983]: 7.4 scrub ok
Oct 09 09:39:07 compute-2 ceph-mon[5983]: osdmap e104: 3 total, 3 up, 3 in
Oct 09 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:07 compute-2 sudo[23097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyfpvchgvfiwjllhtgnygdmurylxmubr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002747.598082-630-43729098745589/AnsiballZ_getent.py'
Oct 09 09:39:07 compute-2 sudo[23097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:08 compute-2 python3.9[23099]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct 09 09:39:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:39:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:08.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:39:08 compute-2 sudo[23097]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 09 09:39:08 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 09 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:08 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:08 compute-2 sudo[23149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:08 compute-2 sudo[23149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:08 compute-2 sudo[23149]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:08 compute-2 sudo[23276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvjzrycvdtexhqvuyiyehquhebkfhdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002748.3843381-660-279131846032748/AnsiballZ_getent.py'
Oct 09 09:39:08 compute-2 sudo[23276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e105 e105: 3 total, 3 up, 3 in
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 11.f scrub starts
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 11.f scrub ok
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 4.1 deep-scrub starts
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 4.1 deep-scrub ok
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 7.9 scrub starts
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 7.9 scrub ok
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 5.7 deep-scrub starts
Oct 09 09:39:08 compute-2 ceph-mon[5983]: 5.7 deep-scrub ok
Oct 09 09:39:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct 09 09:39:08 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct 09 09:39:08 compute-2 ceph-mon[5983]: osdmap e105: 3 total, 3 up, 3 in
Oct 09 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:08 compute-2 python3.9[23278]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct 09 09:39:08 compute-2 sudo[23276]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:08 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949655533s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 active pruub 197.427154541s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:39:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:09.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct 09 09:39:09 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct 09 09:39:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:09 compute-2 sudo[23429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ducfltwtzoiyswitzxmotpelodtoguje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002748.967066-684-280968286397958/AnsiballZ_group.py'
Oct 09 09:39:09 compute-2 sudo[23429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:09 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff980045b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:09 compute-2 python3.9[23431]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:39:09 compute-2 sudo[23429]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:09 compute-2 ceph-mon[5983]: pgmap v136: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 7.5 scrub starts
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 7.5 scrub ok
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 5.5 scrub starts
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 5.5 scrub ok
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 5.16 scrub starts
Oct 09 09:39:09 compute-2 ceph-mon[5983]: 5.16 scrub ok
Oct 09 09:39:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e106 e106: 3 total, 3 up, 3 in
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:09 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:10 compute-2 sudo[23582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifuuhnfotkbncntjnntszckwhxepeqtt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002749.8603039-710-139982840743993/AnsiballZ_file.py'
Oct 09 09:39:10 compute-2 sudo[23582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:10.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:10 compute-2 python3.9[23584]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct 09 09:39:10 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 09 09:39:10 compute-2 sudo[23582]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:10 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 09 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:10 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e107 e107: 3 total, 3 up, 3 in
Oct 09 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393718719s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 active pruub 199.428039551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 4.6 deep-scrub starts
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 4.6 deep-scrub ok
Oct 09 09:39:10 compute-2 ceph-mon[5983]: osdmap e106: 3 total, 3 up, 3 in
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 5.1e scrub starts
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 5.1e scrub ok
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 8.8 scrub starts
Oct 09 09:39:10 compute-2 ceph-mon[5983]: 8.8 scrub ok
Oct 09 09:39:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct 09 09:39:10 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:10 compute-2 sudo[23735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapnijiwxlfaecpmpzyulcksvcgctrwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002750.6086626-743-226821350645898/AnsiballZ_dnf.py'
Oct 09 09:39:10 compute-2 sudo[23735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:10 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:11 compute-2 python3.9[23737]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e108 e108: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604859352s) [1] async=[1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 40'1059 active pruub 200.039978027s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:11 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 09 09:39:11 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 09 09:39:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:11.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:11 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 11.a scrub starts
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 11.a scrub ok
Oct 09 09:39:11 compute-2 ceph-mon[5983]: pgmap v139: 337 pgs: 1 active+remapped, 336 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct 09 09:39:11 compute-2 ceph-mon[5983]: osdmap e107: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 7.b scrub starts
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 7.b scrub ok
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 8.4 deep-scrub starts
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 8.4 deep-scrub ok
Oct 09 09:39:11 compute-2 ceph-mon[5983]: osdmap e108: 3 total, 3 up, 3 in
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 9.b scrub starts
Oct 09 09:39:11 compute-2 ceph-mon[5983]: 9.b scrub ok
Oct 09 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:12 compute-2 sudo[23735]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e109 e109: 3 total, 3 up, 3 in
Oct 09 09:39:12 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:12.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 09 09:39:12 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 09 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:12 compute-2 sudo[23890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvkdvtjgfewmormiuktfjgeqqnygdbey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.1681652-768-147804627154368/AnsiballZ_file.py'
Oct 09 09:39:12 compute-2 sudo[23890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:12 compute-2 python3.9[23892]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:12 compute-2 sudo[23890]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:12 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:12 compute-2 sudo[24042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zigctmsduhukrturkjozieorqzzpfkaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.6728916-792-196934813750109/AnsiballZ_stat.py'
Oct 09 09:39:12 compute-2 sudo[24042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:13 compute-2 python3.9[24044]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 12.8 scrub starts
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 12.8 scrub ok
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 11.1d scrub starts
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 11.1d scrub ok
Oct 09 09:39:13 compute-2 ceph-mon[5983]: osdmap e109: 3 total, 3 up, 3 in
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 9.8 scrub starts
Oct 09 09:39:13 compute-2 ceph-mon[5983]: 9.8 scrub ok
Oct 09 09:39:13 compute-2 sudo[24042]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e110 e110: 3 total, 3 up, 3 in
Oct 09 09:39:13 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995595932s) [1] async=[1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 40'1059 active pruub 201.442153931s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:13 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 09 09:39:13 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 09 09:39:13 compute-2 sudo[24120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsqrbqyywmtfanglplogohuxzgcakidh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002752.6728916-792-196934813750109/AnsiballZ_file.py'
Oct 09 09:39:13 compute-2 sudo[24120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:13.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:13 compute-2 python3.9[24122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:13 compute-2 sudo[24120]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:13 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:13 compute-2 sudo[24273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcdinrstkswhimbzntxulknihfghhcpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002753.638738-831-65136958950493/AnsiballZ_stat.py'
Oct 09 09:39:13 compute-2 sudo[24273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:14 compute-2 python3.9[24275]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:39:14 compute-2 sudo[24273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:14 compute-2 ceph-mon[5983]: pgmap v143: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.1 KiB/s rd, 2.2 KiB/s wr, 6 op/s; 30 B/s, 1 objects/s recovering
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 7.2 scrub starts
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 7.2 scrub ok
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 3.10 scrub starts
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 3.10 scrub ok
Oct 09 09:39:14 compute-2 ceph-mon[5983]: osdmap e110: 3 total, 3 up, 3 in
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 9.17 scrub starts
Oct 09 09:39:14 compute-2 ceph-mon[5983]: 9.17 scrub ok
Oct 09 09:39:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e111 e111: 3 total, 3 up, 3 in
Oct 09 09:39:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:14.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:14 compute-2 sudo[24352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydlcskoklvdaymdabvpijfhukmyxtckd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002753.638738-831-65136958950493/AnsiballZ_file.py'
Oct 09 09:39:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub starts
Oct 09 09:39:14 compute-2 sudo[24352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:14 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub ok
Oct 09 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:14 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:14 compute-2 python3.9[24354]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:39:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e111 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:14 compute-2 sudo[24352]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:14 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:14 compute-2 sudo[24504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbmfusqiwbhgzxelmrzhamxbhocbiwqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002754.7594554-875-208585372937005/AnsiballZ_dnf.py'
Oct 09 09:39:14 compute-2 sudo[24504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 3.6 deep-scrub starts
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 3.6 deep-scrub ok
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 5.f deep-scrub starts
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 5.f deep-scrub ok
Oct 09 09:39:15 compute-2 ceph-mon[5983]: osdmap e111: 3 total, 3 up, 3 in
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 8.9 deep-scrub starts
Oct 09 09:39:15 compute-2 ceph-mon[5983]: 8.9 deep-scrub ok
Oct 09 09:39:15 compute-2 python3.9[24506]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Oct 09 09:39:15 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Oct 09 09:39:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:15.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:15 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00057d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:16.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:16 compute-2 ceph-mon[5983]: pgmap v146: 337 pgs: 1 remapped+peering, 1 peering, 335 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 12.c scrub starts
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 12.c scrub ok
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 5.10 scrub starts
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 5.10 scrub ok
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 12.4 scrub starts
Oct 09 09:39:16 compute-2 ceph-mon[5983]: 12.4 scrub ok
Oct 09 09:39:16 compute-2 sudo[24504]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 09 09:39:16 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 09 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004290 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:16 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:16 compute-2 python3.9[24659]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e112 e112: 3 total, 3 up, 3 in
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 3.2 deep-scrub starts
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 3.2 deep-scrub ok
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 4.e scrub starts
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 4.e scrub ok
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 9.7 scrub starts
Oct 09 09:39:17 compute-2 ceph-mon[5983]: 9.7 scrub ok
Oct 09 09:39:17 compute-2 ceph-mon[5983]: pgmap v147: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 21 B/s, 1 objects/s recovering
Oct 09 09:39:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct 09 09:39:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 09 09:39:17 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 09 09:39:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:17.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:17 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:17 compute-2 python3.9[24811]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct 09 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:18 compute-2 python3.9[24962]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:18.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 3.1 scrub starts
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 3.1 scrub ok
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 4.5 scrub starts
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 4.5 scrub ok
Oct 09 09:39:18 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct 09 09:39:18 compute-2 ceph-mon[5983]: osdmap e112: 3 total, 3 up, 3 in
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 4.15 scrub starts
Oct 09 09:39:18 compute-2 ceph-mon[5983]: 4.15 scrub ok
Oct 09 09:39:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Oct 09 09:39:18 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Oct 09 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:18 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0042b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:18 compute-2 sudo[25113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vddvddgcztceywdivjzwtieuywjodcuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002758.4491465-999-70949262493559/AnsiballZ_systemd.py'
Oct 09 09:39:18 compute-2 sudo[25113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 7.6 scrub starts
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 7.6 scrub ok
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 9.11 scrub starts
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 9.11 scrub ok
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 12.2 scrub starts
Oct 09 09:39:19 compute-2 ceph-mon[5983]: 12.2 scrub ok
Oct 09 09:39:19 compute-2 ceph-mon[5983]: pgmap v149: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct 09 09:39:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct 09 09:39:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e113 e113: 3 total, 3 up, 3 in
Oct 09 09:39:19 compute-2 python3.9[25115]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:19 compute-2 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct 09 09:39:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 09 09:39:19 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 09 09:39:19 compute-2 systemd[1]: tuned.service: Deactivated successfully.
Oct 09 09:39:19 compute-2 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct 09 09:39:19 compute-2 systemd[1]: tuned.service: Consumed 271ms CPU time, 19.1M memory peak, read 4.0M from disk, written 16.0K to disk.
Oct 09 09:39:19 compute-2 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct 09 09:39:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:19.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct 09 09:39:19 compute-2 systemd[1]: Started Dynamic System Tuning Daemon.
Oct 09 09:39:19 compute-2 sudo[25113]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:19 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:20 compute-2 python3.9[25278]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct 09 09:39:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:20.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 12.b scrub starts
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 12.b scrub ok
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 5.11 scrub starts
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 5.11 scrub ok
Oct 09 09:39:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct 09 09:39:20 compute-2 ceph-mon[5983]: osdmap e113: 3 total, 3 up, 3 in
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 4.2 scrub starts
Oct 09 09:39:20 compute-2 ceph-mon[5983]: 4.2 scrub ok
Oct 09 09:39:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 09 09:39:20 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 09 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:20 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e114 e114: 3 total, 3 up, 3 in
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 12.e deep-scrub starts
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 12.e deep-scrub ok
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 11.1e deep-scrub starts
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 11.1e deep-scrub ok
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 9.18 scrub starts
Oct 09 09:39:21 compute-2 ceph-mon[5983]: 9.18 scrub ok
Oct 09 09:39:21 compute-2 ceph-mon[5983]: pgmap v151: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 B/s, 1 objects/s recovering
Oct 09 09:39:21 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct 09 09:39:21 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Oct 09 09:39:21 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Oct 09 09:39:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:21.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:21 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:22.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 12.6 scrub starts
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 12.6 scrub ok
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 9.12 scrub starts
Oct 09 09:39:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct 09 09:39:22 compute-2 ceph-mon[5983]: osdmap e114: 3 total, 3 up, 3 in
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 9.12 scrub ok
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 12.1d scrub starts
Oct 09 09:39:22 compute-2 ceph-mon[5983]: 12.1d scrub ok
Oct 09 09:39:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 09 09:39:22 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 09 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff98004ed0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:22 compute-2 sudo[25431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afnsfgultjbkngkjxjtqdwijgqkkgcam ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002762.3201125-1170-221562668626230/AnsiballZ_systemd.py'
Oct 09 09:39:22 compute-2 sudo[25431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:22 compute-2 python3.9[25433]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:22 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:22 compute-2 sudo[25431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 5.a deep-scrub starts
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 5.a deep-scrub ok
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 8.12 deep-scrub starts
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 8.12 deep-scrub ok
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 8.c scrub starts
Oct 09 09:39:23 compute-2 ceph-mon[5983]: 8.c scrub ok
Oct 09 09:39:23 compute-2 ceph-mon[5983]: pgmap v153: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct 09 09:39:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e115 e115: 3 total, 3 up, 3 in
Oct 09 09:39:23 compute-2 sudo[25585]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkimwuimcahxekccehvjbwfoljspxsev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002763.003724-1170-54963368005821/AnsiballZ_systemd.py'
Oct 09 09:39:23 compute-2 sudo[25585]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 09 09:39:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:23.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:23 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 09 09:39:23 compute-2 python3.9[25587]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:23 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:23 compute-2 sudo[25585]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:23 compute-2 sshd-session[19384]: Connection closed by 192.168.122.30 port 51912
Oct 09 09:39:23 compute-2 sshd-session[19381]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:23 compute-2 systemd[1]: session-22.scope: Deactivated successfully.
Oct 09 09:39:23 compute-2 systemd[1]: session-22.scope: Consumed 49.584s CPU time.
Oct 09 09:39:23 compute-2 systemd-logind[800]: Session 22 logged out. Waiting for processes to exit.
Oct 09 09:39:23 compute-2 systemd-logind[800]: Removed session 22.
Oct 09 09:39:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:24.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 7.8 scrub starts
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 7.8 scrub ok
Oct 09 09:39:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct 09 09:39:24 compute-2 ceph-mon[5983]: osdmap e115: 3 total, 3 up, 3 in
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 11.14 scrub starts
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 11.14 scrub ok
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 8.2 scrub starts
Oct 09 09:39:24 compute-2 ceph-mon[5983]: 8.2 scrub ok
Oct 09 09:39:24 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 09 09:39:24 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 09 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e115 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:24 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 7.13 scrub starts
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 7.13 scrub ok
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 8.18 scrub starts
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 8.18 scrub ok
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 9.3 scrub starts
Oct 09 09:39:25 compute-2 ceph-mon[5983]: pgmap v155: 337 pgs: 337 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:25 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct 09 09:39:25 compute-2 ceph-mon[5983]: 9.3 scrub ok
Oct 09 09:39:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e116 e116: 3 total, 3 up, 3 in
Oct 09 09:39:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:25.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:25 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 09 09:39:25 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 09 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:25 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e117 e117: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:26.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 5.c scrub starts
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 5.c scrub ok
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 9.6 scrub starts
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 9.6 scrub ok
Oct 09 09:39:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct 09 09:39:26 compute-2 ceph-mon[5983]: osdmap e116: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 9.9 scrub starts
Oct 09 09:39:26 compute-2 ceph-mon[5983]: 9.9 scrub ok
Oct 09 09:39:26 compute-2 ceph-mon[5983]: osdmap e117: 3 total, 3 up, 3 in
Oct 09 09:39:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 09 09:39:26 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 09 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:26 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004350 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e118 e118: 3 total, 3 up, 3 in
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 6.0 scrub starts
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 6.0 scrub ok
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 8.17 scrub starts
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 8.17 scrub ok
Oct 09 09:39:27 compute-2 ceph-mon[5983]: pgmap v158: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 8.3 scrub starts
Oct 09 09:39:27 compute-2 ceph-mon[5983]: 8.3 scrub ok
Oct 09 09:39:27 compute-2 ceph-mon[5983]: osdmap e118: 3 total, 3 up, 3 in
Oct 09 09:39:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:27.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct 09 09:39:27 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct 09 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:27 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e119 e119: 3 total, 3 up, 3 in
Oct 09 09:39:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:28.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:28 compute-2 ceph-mon[5983]: 6.6 scrub starts
Oct 09 09:39:28 compute-2 ceph-mon[5983]: 6.6 scrub ok
Oct 09 09:39:28 compute-2 ceph-mon[5983]: 12.7 scrub starts
Oct 09 09:39:28 compute-2 ceph-mon[5983]: 12.7 scrub ok
Oct 09 09:39:28 compute-2 ceph-mon[5983]: osdmap e119: 3 total, 3 up, 3 in
Oct 09 09:39:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 09 09:39:28 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 09 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:28 compute-2 sudo[25621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:28 compute-2 sudo[25621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:28 compute-2 sudo[25621]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:28 compute-2 sshd-session[25646]: Accepted publickey for zuul from 192.168.122.30 port 46736 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:39:28 compute-2 systemd-logind[800]: New session 23 of user zuul.
Oct 09 09:39:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:28 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:28 compute-2 systemd[1]: Started Session 23 of User zuul.
Oct 09 09:39:28 compute-2 sshd-session[25646]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:39:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e120 e120: 3 total, 3 up, 3 in
Oct 09 09:39:29 compute-2 ceph-mon[5983]: 6.4 scrub starts
Oct 09 09:39:29 compute-2 ceph-mon[5983]: 6.4 scrub ok
Oct 09 09:39:29 compute-2 ceph-mon[5983]: pgmap v161: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:29 compute-2 ceph-mon[5983]: 4.3 scrub starts
Oct 09 09:39:29 compute-2 ceph-mon[5983]: 4.3 scrub ok
Oct 09 09:39:29 compute-2 ceph-mon[5983]: osdmap e120: 3 total, 3 up, 3 in
Oct 09 09:39:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:29.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct 09 09:39:29 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct 09 09:39:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e120 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:29 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0043e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:29 compute-2 python3.9[25799]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:30.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:30 compute-2 ceph-mon[5983]: 6.b deep-scrub starts
Oct 09 09:39:30 compute-2 ceph-mon[5983]: 6.b deep-scrub ok
Oct 09 09:39:30 compute-2 ceph-mon[5983]: 8.d deep-scrub starts
Oct 09 09:39:30 compute-2 ceph-mon[5983]: 8.d deep-scrub ok
Oct 09 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:39:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct 09 09:39:30 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct 09 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:30 compute-2 sudo[25955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-josievwndpyeaezjqqpbmllcgpmgmpjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.142821-70-25895961446349/AnsiballZ_getent.py'
Oct 09 09:39:30 compute-2 sudo[25955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:30 compute-2 python3.9[25957]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct 09 09:39:30 compute-2 sudo[25955]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:30 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:31 compute-2 sudo[26108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmxdoqzmqqgapjbclpyhlulxkrmfsirb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.8846343-106-132276524015588/AnsiballZ_setup.py'
Oct 09 09:39:31 compute-2 sudo[26108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:31 compute-2 ceph-mon[5983]: 6.9 scrub starts
Oct 09 09:39:31 compute-2 ceph-mon[5983]: 6.9 scrub ok
Oct 09 09:39:31 compute-2 ceph-mon[5983]: pgmap v163: 337 pgs: 1 unknown, 336 active+clean; 457 KiB data, 152 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:31 compute-2 ceph-mon[5983]: 8.1f deep-scrub starts
Oct 09 09:39:31 compute-2 ceph-mon[5983]: 8.1f deep-scrub ok
Oct 09 09:39:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 09 09:39:31 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 09 09:39:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:39:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:31.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:39:31 compute-2 python3.9[26110]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:39:31 compute-2 sudo[26108]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:31 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:31 compute-2 sudo[26193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqiammapnzvpgmfjqzjesrptktohemze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002770.8846343-106-132276524015588/AnsiballZ_dnf.py'
Oct 09 09:39:31 compute-2 sudo[26193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:31 compute-2 python3.9[26195]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 09 09:39:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:39:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:32.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:39:32 compute-2 ceph-mon[5983]: 6.c deep-scrub starts
Oct 09 09:39:32 compute-2 ceph-mon[5983]: 6.c deep-scrub ok
Oct 09 09:39:32 compute-2 ceph-mon[5983]: 8.16 scrub starts
Oct 09 09:39:32 compute-2 ceph-mon[5983]: 8.16 scrub ok
Oct 09 09:39:32 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 09 09:39:32 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 09 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004400 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:32 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:32 compute-2 sudo[26193]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 09 09:39:33 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 10.15 scrub starts
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 10.15 scrub ok
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 8.1b scrub starts
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 8.1b scrub ok
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 11.3 scrub starts
Oct 09 09:39:33 compute-2 ceph-mon[5983]: 11.3 scrub ok
Oct 09 09:39:33 compute-2 ceph-mon[5983]: pgmap v164: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 7.0 KiB/s rd, 0 B/s wr, 9 op/s; 54 B/s, 2 objects/s recovering
Oct 09 09:39:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct 09 09:39:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e121 e121: 3 total, 3 up, 3 in
Oct 09 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:39:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:33.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:33 compute-2 sudo[26347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxxjlufmigdylkmcnkrdvrcztaeravtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002773.17204-148-150437558873730/AnsiballZ_dnf.py'
Oct 09 09:39:33 compute-2 sudo[26347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:33 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:33 compute-2 python3.9[26349]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:34.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 09 09:39:34 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 10.18 scrub starts
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 10.18 scrub ok
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 8.10 scrub starts
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 8.10 scrub ok
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 8.6 scrub starts
Oct 09 09:39:34 compute-2 ceph-mon[5983]: 8.6 scrub ok
Oct 09 09:39:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct 09 09:39:34 compute-2 ceph-mon[5983]: osdmap e121: 3 total, 3 up, 3 in
Oct 09 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:34 compute-2 sudo[26347]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:34 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:35 compute-2 sudo[26502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvbaxyabrnkuxdtkujbrcrhaznqydpdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002774.6705582-172-204786896957270/AnsiballZ_systemd.py'
Oct 09 09:39:35 compute-2 sudo[26502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 09 09:39:35 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 10.19 scrub starts
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 10.19 scrub ok
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 9.f scrub starts
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 9.f scrub ok
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 10.1f deep-scrub starts
Oct 09 09:39:35 compute-2 ceph-mon[5983]: 10.1f deep-scrub ok
Oct 09 09:39:35 compute-2 ceph-mon[5983]: pgmap v166: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 0 B/s wr, 9 op/s; 53 B/s, 2 objects/s recovering
Oct 09 09:39:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct 09 09:39:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e122 e122: 3 total, 3 up, 3 in
Oct 09 09:39:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:35.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:35 compute-2 python3.9[26504]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:39:35 compute-2 sudo[26502]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:35 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:36 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e123 e123: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-2 python3.9[26658]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:36.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 09 09:39:36 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 09 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 10.8 scrub starts
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 10.8 scrub ok
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 6.e scrub starts
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 6.e scrub ok
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 10.f scrub starts
Oct 09 09:39:36 compute-2 ceph-mon[5983]: 10.f scrub ok
Oct 09 09:39:36 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct 09 09:39:36 compute-2 ceph-mon[5983]: osdmap e122: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-2 ceph-mon[5983]: osdmap e123: 3 total, 3 up, 3 in
Oct 09 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:36 compute-2 sudo[26809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmhgybczzenanajxucffrnturfwiayz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002776.3357842-227-182281708147018/AnsiballZ_sefcontext.py'
Oct 09 09:39:36 compute-2 sudo[26809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:36 compute-2 python3.9[26811]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct 09 09:39:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:36 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:36 compute-2 sudo[26809]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:37 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e124 e124: 3 total, 3 up, 3 in
Oct 09 09:39:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 09 09:39:37 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 10.5 scrub starts
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 10.5 scrub ok
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 6.5 scrub starts
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 6.5 scrub ok
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 10.4 scrub starts
Oct 09 09:39:37 compute-2 ceph-mon[5983]: 10.4 scrub ok
Oct 09 09:39:37 compute-2 ceph-mon[5983]: pgmap v169: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct 09 09:39:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct 09 09:39:37 compute-2 ceph-mon[5983]: osdmap e124: 3 total, 3 up, 3 in
Oct 09 09:39:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:37.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:37 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:37 compute-2 python3.9[26961]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:38.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:38 compute-2 sudo[27119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwqgwfyeqzejordzgbrdoazntxnhhia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002778.0221589-280-94967237442707/AnsiballZ_dnf.py'
Oct 09 09:39:38 compute-2 sudo[27119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:38 compute-2 ceph-mon[5983]: 6.2 scrub starts
Oct 09 09:39:38 compute-2 ceph-mon[5983]: 6.2 scrub ok
Oct 09 09:39:38 compute-2 ceph-mon[5983]: 10.1 scrub starts
Oct 09 09:39:38 compute-2 ceph-mon[5983]: 10.1 scrub ok
Oct 09 09:39:38 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e125 e125: 3 total, 3 up, 3 in
Oct 09 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:38 compute-2 python3.9[27121]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:38 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e126 e126: 3 total, 3 up, 3 in
Oct 09 09:39:39 compute-2 ceph-mon[5983]: 6.a scrub starts
Oct 09 09:39:39 compute-2 ceph-mon[5983]: 6.a scrub ok
Oct 09 09:39:39 compute-2 ceph-mon[5983]: osdmap e125: 3 total, 3 up, 3 in
Oct 09 09:39:39 compute-2 ceph-mon[5983]: pgmap v172: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct 09 09:39:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:39.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:39 compute-2 sudo[27119]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:39 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:39 compute-2 sudo[27273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtooddaukaozzxctniidmfjdzjkveipg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002779.451002-304-240781815617396/AnsiballZ_command.py'
Oct 09 09:39:39 compute-2 sudo[27273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:39 compute-2 python3.9[27275]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:40.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:40 compute-2 ceph-mon[5983]: 6.3 scrub starts
Oct 09 09:39:40 compute-2 ceph-mon[5983]: 6.3 scrub ok
Oct 09 09:39:40 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct 09 09:39:40 compute-2 ceph-mon[5983]: osdmap e126: 3 total, 3 up, 3 in
Oct 09 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:40 compute-2 sudo[27273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:40 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c004460 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:40 compute-2 sudo[27561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqxebzfmibnzrsdfjpwpufnbnuxrccbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002780.6348996-328-209681573128640/AnsiballZ_file.py'
Oct 09 09:39:40 compute-2 sudo[27561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:41 compute-2 python3.9[27563]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:39:41 compute-2 sudo[27561]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:41 compute-2 ceph-mon[5983]: 10.1b deep-scrub starts
Oct 09 09:39:41 compute-2 ceph-mon[5983]: 10.1b deep-scrub ok
Oct 09 09:39:41 compute-2 ceph-mon[5983]: 10.1a scrub starts
Oct 09 09:39:41 compute-2 ceph-mon[5983]: 10.1a scrub ok
Oct 09 09:39:41 compute-2 ceph-mon[5983]: pgmap v174: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:41 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct 09 09:39:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e127 e127: 3 total, 3 up, 3 in
Oct 09 09:39:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:41.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:41 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:41 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:41 compute-2 python3.9[27714]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:42 compute-2 sudo[27866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukwzyvngxvgfenbhvtvbxemlpguwufz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002781.8968542-376-238610026331121/AnsiballZ_dnf.py'
Oct 09 09:39:42 compute-2 sudo[27866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:42.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093942 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:39:42 compute-2 python3.9[27868]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:42 compute-2 ceph-mon[5983]: 10.1d scrub starts
Oct 09 09:39:42 compute-2 ceph-mon[5983]: 10.1d scrub ok
Oct 09 09:39:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct 09 09:39:42 compute-2 ceph-mon[5983]: osdmap e127: 3 total, 3 up, 3 in
Oct 09 09:39:42 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e128 e128: 3 total, 3 up, 3 in
Oct 09 09:39:42 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:42 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:42 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:43 compute-2 sudo[27866]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:43 compute-2 ceph-mon[5983]: 10.9 scrub starts
Oct 09 09:39:43 compute-2 ceph-mon[5983]: 10.9 scrub ok
Oct 09 09:39:43 compute-2 ceph-mon[5983]: pgmap v176: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct 09 09:39:43 compute-2 ceph-mon[5983]: osdmap e128: 3 total, 3 up, 3 in
Oct 09 09:39:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:43.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:43 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e129 e129: 3 total, 3 up, 3 in
Oct 09 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:43 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:43 compute-2 sudo[28021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbebfptkxeacoivztnexaqgjtzapbddz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002783.4345107-403-237128950847610/AnsiballZ_dnf.py'
Oct 09 09:39:43 compute-2 sudo[28021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:43 compute-2 python3.9[28023]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:39:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:44.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:44 compute-2 ceph-mon[5983]: 10.c scrub starts
Oct 09 09:39:44 compute-2 ceph-mon[5983]: 10.c scrub ok
Oct 09 09:39:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct 09 09:39:44 compute-2 ceph-mon[5983]: osdmap e129: 3 total, 3 up, 3 in
Oct 09 09:39:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e130 e130: 3 total, 3 up, 3 in
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #19. Immutable memtables: 0.
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.335654) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 19
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784335681, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 3413, "num_deletes": 251, "total_data_size": 7306794, "memory_usage": 7417192, "flush_reason": "Manual Compaction"}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #20: started
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:44 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784347904, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 20, "file_size": 4787675, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7214, "largest_seqno": 10622, "table_properties": {"data_size": 4771843, "index_size": 10214, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4549, "raw_key_size": 42141, "raw_average_key_size": 23, "raw_value_size": 4736781, "raw_average_value_size": 2625, "num_data_blocks": 444, "num_entries": 1804, "num_filter_entries": 1804, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002664, "oldest_key_time": 1760002664, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 12331 microseconds, and 8654 cpu microseconds.
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.347985) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #20: 4787675 bytes OK
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348023) [db/memtable_list.cc:519] [default] Level-0 commit table #20 started
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348355) [db/memtable_list.cc:722] [default] Level-0 commit table #20: memtable #1 done
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348366) EVENT_LOG_v1 {"time_micros": 1760002784348363, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.348381) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 7289936, prev total WAL file size 7289936, number of live WAL files 2.
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000016.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.349669) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [20(4675KB)], [18(12MB)]
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784349732, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [20], "files_L6": [18], "score": -1, "input_data_size": 18169717, "oldest_snapshot_seqno": -1}
Oct 09 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff8c0022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #21: 3978 keys, 14451597 bytes, temperature: kUnknown
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784386605, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 21, "file_size": 14451597, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14418880, "index_size": 21663, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9989, "raw_key_size": 101461, "raw_average_key_size": 25, "raw_value_size": 14339929, "raw_average_value_size": 3604, "num_data_blocks": 936, "num_entries": 3978, "num_filter_entries": 3978, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002784, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 21, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387046) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14451597 bytes
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387437) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 488.5 rd, 388.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.6, 12.8 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.0) OK, records in: 4502, records dropped: 524 output_compression: NoCompression
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.387450) EVENT_LOG_v1 {"time_micros": 1760002784387444, "job": 8, "event": "compaction_finished", "compaction_time_micros": 37192, "compaction_time_cpu_micros": 25510, "output_level": 6, "num_output_files": 1, "total_output_size": 14451597, "num_input_records": 4502, "num_output_records": 3978, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784388229, "job": 8, "event": "table_file_deletion", "file_number": 20}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000018.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002784389864, "job": 8, "event": "table_file_deletion", "file_number": 18}
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.349610) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389910) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389917) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:39:44.389918) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:39:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:44 compute-2 sudo[28021]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:44 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:45.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:45 compute-2 ceph-mon[5983]: 10.6 deep-scrub starts
Oct 09 09:39:45 compute-2 ceph-mon[5983]: 10.6 deep-scrub ok
Oct 09 09:39:45 compute-2 ceph-mon[5983]: pgmap v179: 337 pgs: 337 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:45 compute-2 ceph-mon[5983]: osdmap e130: 3 total, 3 up, 3 in
Oct 09 09:39:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e131 e131: 3 total, 3 up, 3 in
Oct 09 09:39:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:45 compute-2 sudo[28175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndvgxnymqexbodewsxdjeyljpbpojdrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002785.1418056-440-119422377978767/AnsiballZ_stat.py'
Oct 09 09:39:45 compute-2 sudo[28175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:45 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa00064e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:45 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 09:39:45 compute-2 python3.9[28177]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:39:45 compute-2 sudo[28175]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:46 compute-2 sudo[28330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eblkjqlfstpnhvpdlfkoemvfgoomxkan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002785.7140703-463-243712036668735/AnsiballZ_slurp.py'
Oct 09 09:39:46 compute-2 sudo[28330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:39:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e132 e132: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 09:39:46 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 09:39:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:46.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:46 compute-2 python3.9[28332]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct 09 09:39:46 compute-2 sudo[28330]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:46 compute-2 ceph-mon[5983]: 10.a scrub starts
Oct 09 09:39:46 compute-2 ceph-mon[5983]: 10.a scrub ok
Oct 09 09:39:46 compute-2 ceph-mon[5983]: osdmap e131: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-2 ceph-mon[5983]: osdmap e132: 3 total, 3 up, 3 in
Oct 09 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0044c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:46 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff7c0044c0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 e133: 3 total, 3 up, 3 in
Oct 09 09:39:47 compute-2 sshd-session[25649]: Connection closed by 192.168.122.30 port 46736
Oct 09 09:39:47 compute-2 sshd-session[25646]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:47 compute-2 systemd[1]: session-23.scope: Deactivated successfully.
Oct 09 09:39:47 compute-2 systemd[1]: session-23.scope: Consumed 13.314s CPU time.
Oct 09 09:39:47 compute-2 systemd-logind[800]: Session 23 logged out. Waiting for processes to exit.
Oct 09 09:39:47 compute-2 systemd-logind[800]: Removed session 23.
Oct 09 09:39:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:47.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:47 compute-2 ceph-mon[5983]: 10.0 scrub starts
Oct 09 09:39:47 compute-2 ceph-mon[5983]: 10.0 scrub ok
Oct 09 09:39:47 compute-2 ceph-mon[5983]: pgmap v183: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:47 compute-2 ceph-mon[5983]: osdmap e133: 3 total, 3 up, 3 in
Oct 09 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:47 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:39:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:48.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:39:48 compute-2 ceph-mon[5983]: 10.d scrub starts
Oct 09 09:39:48 compute-2 ceph-mon[5983]: 10.d scrub ok
Oct 09 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:48 compute-2 sudo[28362]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:39:48 compute-2 sudo[28362]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:48 compute-2 sudo[28362]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:48 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:49.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:49 compute-2 ceph-mon[5983]: 10.b scrub starts
Oct 09 09:39:49 compute-2 ceph-mon[5983]: 10.b scrub ok
Oct 09 09:39:49 compute-2 ceph-mon[5983]: pgmap v185: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:49 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4002600 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:50 compute-2 sudo[28388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:39:50 compute-2 sudo[28388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:50 compute-2 sudo[28388]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:50 compute-2 sudo[28413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:39:50 compute-2 sudo[28413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:50.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03b420 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:50 compute-2 sudo[28413]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:50 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:51.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:51 compute-2 ceph-mon[5983]: pgmap v186: 337 pgs: 1 activating+remapped, 336 active+clean; 457 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 5/222 objects misplaced (2.252%)
Oct 09 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:51 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:52 compute-2 sshd-session[28469]: Accepted publickey for zuul from 192.168.122.30 port 32974 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:39:52 compute-2 systemd-logind[800]: New session 24 of user zuul.
Oct 09 09:39:52 compute-2 systemd[1]: Started Session 24 of User zuul.
Oct 09 09:39:52 compute-2 sshd-session[28469]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:39:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:39:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:52.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/093952 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:52 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:52 compute-2 python3.9[28623]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-2 ceph-mon[5983]: pgmap v187: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:39:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:39:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:53.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:53 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:53 compute-2 python3.9[28777]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:54.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:54 compute-2 python3.9[28972]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:54 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:55 compute-2 sshd-session[28472]: Connection closed by 192.168.122.30 port 32974
Oct 09 09:39:55 compute-2 sshd-session[28469]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:39:55 compute-2 systemd[1]: session-24.scope: Deactivated successfully.
Oct 09 09:39:55 compute-2 systemd[1]: session-24.scope: Consumed 1.782s CPU time.
Oct 09 09:39:55 compute-2 systemd-logind[800]: Session 24 logged out. Waiting for processes to exit.
Oct 09 09:39:55 compute-2 systemd-logind[800]: Removed session 24.
Oct 09 09:39:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:55.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:55 compute-2 ceph-mon[5983]: pgmap v188: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:55 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:56 compute-2 sudo[28999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:39:56 compute-2 sudo[28999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:39:56 compute-2 sudo[28999]: pam_unix(sudo:session): session closed for user root
Oct 09 09:39:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:56.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:39:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:56 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:57.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:57 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effa4003140 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:57 compute-2 ceph-mon[5983]: pgmap v189: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:39:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:39:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:58 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:39:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:39:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:39:59.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:39:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:39:59 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:39:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:39:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:39:59 compute-2 ceph-mon[5983]: pgmap v190: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:40:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:00.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:40:00 compute-2 systemd[1]: Starting system activity accounting tool...
Oct 09 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:40:00 compute-2 sshd-session[29029]: Accepted publickey for zuul from 192.168.122.30 port 49846 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:00 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 09:40:00 compute-2 systemd[1]: Finished system activity accounting tool.
Oct 09 09:40:00 compute-2 systemd-logind[800]: New session 25 of user zuul.
Oct 09 09:40:00 compute-2 systemd[1]: Started Session 25 of User zuul.
Oct 09 09:40:00 compute-2 sshd-session[29029]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7eff88005100 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:00 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7effac03bd40 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:00 compute-2 ceph-mon[5983]: overall HEALTH_OK
Oct 09 09:40:01 compute-2 python3.9[29184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:01.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:01 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:01 compute-2 python3.9[29339]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:01 compute-2 ceph-mon[5983]: pgmap v191: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:02.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:02 compute-2 sudo[29494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pslpvqiavorbnsodiaknrkaehtioylzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002802.0931497-82-63549837836817/AnsiballZ_setup.py'
Oct 09 09:40:02 compute-2 sudo[29494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:02 compute-2 python3.9[29496]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:02 compute-2 sudo[29494]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[21001]: 09/10/2025 09:40:02 : epoch 68e78278 : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x55763d50ddf0 fd 49 proxy ignored for local
Oct 09 09:40:02 compute-2 kernel: ganesha.nfsd[21367]: segfault at 50 ip 00007f0039af032e sp 00007efffdffa210 error 4 in libntirpc.so.5.8[7f0039ad5000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 09 09:40:02 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:40:02 compute-2 systemd[1]: Started Process Core Dump (PID 29509/UID 0).
Oct 09 09:40:03 compute-2 sudo[29580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjdsiirqjxmxedvojrqsnzgewldgpqcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002802.0931497-82-63549837836817/AnsiballZ_dnf.py'
Oct 09 09:40:03 compute-2 sudo[29580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:03 compute-2 python3.9[29582]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:03.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:03 compute-2 ceph-mon[5983]: pgmap v192: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail
Oct 09 09:40:04 compute-2 systemd-coredump[29523]: Process 21005 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 47:
                                                   #0  0x00007f0039af032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:40:04 compute-2 systemd[1]: systemd-coredump@1-29509-0.service: Deactivated successfully.
Oct 09 09:40:04 compute-2 systemd[1]: systemd-coredump@1-29509-0.service: Consumed 1.133s CPU time.
Oct 09 09:40:04 compute-2 podman[29592]: 2025-10-09 09:40:04.135443856 +0000 UTC m=+0.024765554 container died 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:40:04 compute-2 systemd[1]: var-lib-containers-storage-overlay-90301e70d30233a90751c53fcdc9e2ec380f93735b19e9226a9082fabe201d4c-merged.mount: Deactivated successfully.
Oct 09 09:40:04 compute-2 sudo[29580]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:04 compute-2 podman[29592]: 2025-10-09 09:40:04.154484892 +0000 UTC m=+0.043806568 container remove 7d797a2017b6fe8f4902310e3ed689ee7a3fd50ce65321ab5df44571f3fcb1ac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:40:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:04.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct 09 09:40:04 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.293s CPU time.
Oct 09 09:40:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:04 compute-2 sudo[29774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifsqypggzcvalnuhptobykapgtsjwdsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002804.3074672-118-207267577356634/AnsiballZ_setup.py'
Oct 09 09:40:04 compute-2 sudo[29774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:04 compute-2 python3.9[29776]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:04 compute-2 sudo[29774]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:05.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:05 compute-2 sudo[29970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuzzrnqjbcduwmjosxzcjbshzazhjheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002805.2556713-151-24568332735063/AnsiballZ_file.py'
Oct 09 09:40:05 compute-2 sudo[29970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:05 compute-2 python3.9[29972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:05 compute-2 sudo[29970]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:05 compute-2 ceph-mon[5983]: pgmap v193: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Oct 09 09:40:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:06.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:06 compute-2 sudo[30123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spidmgbtlwxlpbjungrtjtddhzifgzoy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002805.9569461-175-17468678334025/AnsiballZ_command.py'
Oct 09 09:40:06 compute-2 sudo[30123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:06 compute-2 python3.9[30125]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:40:06 compute-2 sudo[30123]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:07 compute-2 sudo[30285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhdnrtxinuofkbobgzhdgkvnrqorsmwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002806.716527-199-95734087469323/AnsiballZ_stat.py'
Oct 09 09:40:07 compute-2 sudo[30285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:07 compute-2 python3.9[30287]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:07 compute-2 sudo[30285]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:07.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:07 compute-2 sudo[30363]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygsfquywhapzxmucgqeiwofhztreggsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002806.716527-199-95734087469323/AnsiballZ_file.py'
Oct 09 09:40:07 compute-2 sudo[30363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094007 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:07 compute-2 python3.9[30365]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:07 compute-2 sudo[30363]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:07 compute-2 sudo[30516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrridyalbkmvredsfomhihapjvsrtgkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002807.7171326-235-44579638410029/AnsiballZ_stat.py'
Oct 09 09:40:07 compute-2 ceph-mon[5983]: pgmap v194: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:07 compute-2 sudo[30516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:08 compute-2 python3.9[30518]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:08 compute-2 sudo[30516]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:08.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:08 compute-2 sudo[30595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvnxztzhkijcdsgblhrkitqwlkcqzjol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002807.7171326-235-44579638410029/AnsiballZ_file.py'
Oct 09 09:40:08 compute-2 sudo[30595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:08 compute-2 python3.9[30597]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:08 compute-2 sudo[30595]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-2 sudo[30622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:08 compute-2 sudo[30622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:08 compute-2 sudo[30622]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094008 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/094008 (4) : haproxy version is 2.3.17-d1c9119
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [NOTICE] 281/094008 (4) : path to executable is /usr/local/sbin/haproxy
Oct 09 09:40:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [ALERT] 281/094008 (4) : backend 'backend' has no server available!
Oct 09 09:40:08 compute-2 sudo[30772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekaynumaabxbdgauefbgnalxfgtlzncb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002808.629934-274-236690033299487/AnsiballZ_ini_file.py'
Oct 09 09:40:08 compute-2 sudo[30772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:09 compute-2 python3.9[30774]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:09 compute-2 sudo[30772]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:09 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:40:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:09.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:09 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:40:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:09 compute-2 sudo[30925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhtultnoldsxigrgqtboyoqyeiqomtwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002809.2310855-274-221351991562904/AnsiballZ_ini_file.py'
Oct 09 09:40:09 compute-2 sudo[30925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:09 compute-2 python3.9[30927]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:09 compute-2 sudo[30925]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:09 compute-2 sudo[31078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acilcyvykthrulxeumbtvntlvgfxmvzb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002809.7073624-274-234923560297537/AnsiballZ_ini_file.py'
Oct 09 09:40:09 compute-2 sudo[31078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:09 compute-2 ceph-mon[5983]: pgmap v195: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:10 compute-2 python3.9[31080]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:10 compute-2 sudo[31078]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:10.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:10 compute-2 sudo[31231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msycgrjkxzvsecultubdpemfahephxva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002810.1895525-274-151772181202724/AnsiballZ_ini_file.py'
Oct 09 09:40:10 compute-2 sudo[31231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:10 compute-2 python3.9[31233]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:10 compute-2 sudo[31231]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:11 compute-2 sudo[31383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echgcllqygwuisgzjxkvgukkdfbssuhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002810.8897243-367-36240157595438/AnsiballZ_dnf.py'
Oct 09 09:40:11 compute-2 sudo[31383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:11 compute-2 python3.9[31385]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:11.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:11 compute-2 ceph-mon[5983]: pgmap v196: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 767 B/s wr, 2 op/s
Oct 09 09:40:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:12.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:12 compute-2 sudo[31383]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094012 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:12 compute-2 sudo[31538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nuiwzbekraojbtmtppokgewsqdtbhctk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002812.6959567-400-59675163476090/AnsiballZ_setup.py'
Oct 09 09:40:12 compute-2 sudo[31538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:13 compute-2 python3.9[31540]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:13 compute-2 sudo[31538]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:13.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:13 compute-2 sudo[31693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzrwjlkotpbnvswiygtxbdypivlfpesc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002813.3303926-424-176670462934014/AnsiballZ_stat.py'
Oct 09 09:40:13 compute-2 sudo[31693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:13 compute-2 python3.9[31695]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:40:13 compute-2 sudo[31693]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:13 compute-2 ceph-mon[5983]: pgmap v197: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 853 B/s wr, 2 op/s
Oct 09 09:40:14 compute-2 sudo[31845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pehokqgarnpbxxtrrboskupqbpvozvhl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002813.921255-451-52792152678093/AnsiballZ_stat.py'
Oct 09 09:40:14 compute-2 sudo[31845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:14.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:14 compute-2 python3.9[31847]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:40:14 compute-2 sudo[31845]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:14 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 2.
Oct 09 09:40:14 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:40:14 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Consumed 1.293s CPU time.
Oct 09 09:40:14 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:40:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.572919541 +0000 UTC m=+0.031569438 container create c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:40:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.613105473 +0000 UTC m=+0.071755359 container init c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.62089627 +0000 UTC m=+0.079546157 container start c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 09:40:14 compute-2 bash[31934]: c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f
Oct 09 09:40:14 compute-2 podman[31934]: 2025-10-09 09:40:14.561214836 +0000 UTC m=+0.019864733 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:40:14 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:14 compute-2 sudo[32092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppvkrhcfzhvshrpozlirnekgcwgzjdur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002814.5349863-481-187665395203048/AnsiballZ_service_facts.py'
Oct 09 09:40:14 compute-2 sudo[32092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:15 compute-2 python3.9[32094]: ansible-service_facts Invoked
Oct 09 09:40:15 compute-2 network[32111]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:40:15 compute-2 network[32112]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:40:15 compute-2 network[32113]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:40:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:15.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:15 compute-2 ceph-mon[5983]: pgmap v198: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 255 B/s wr, 0 op/s
Oct 09 09:40:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:16.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:17.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:17 compute-2 ceph-mon[5983]: pgmap v199: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:18 compute-2 sudo[32092]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000025s ======
Oct 09 09:40:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:18.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000025s
Oct 09 09:40:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:18 compute-2 sudo[32403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlsxtjnhfcbjiijqugdlbiftrlohaodq ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1760002818.6812682-521-153268418130621/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1760002818.6812682-521-153268418130621/args'
Oct 09 09:40:18 compute-2 sudo[32403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:19 compute-2 sudo[32403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:19.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:19 compute-2 sudo[32571]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuiegjraxvjbffaajrgpcmpqqmlnsqzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002819.2754765-554-242105492803487/AnsiballZ_dnf.py'
Oct 09 09:40:19 compute-2 sudo[32571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:19 compute-2 python3.9[32573]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:40:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:19 compute-2 ceph-mon[5983]: pgmap v200: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 09 09:40:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:20.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:20 compute-2 sudo[32571]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:21.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:21 compute-2 sudo[32726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbwwpnhiklykmgegsvjpnmjncxqsnhak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002821.1021445-592-144444047730373/AnsiballZ_package_facts.py'
Oct 09 09:40:21 compute-2 sudo[32726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:21 compute-2 python3.9[32728]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct 09 09:40:21 compute-2 ceph-mon[5983]: pgmap v201: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 255 B/s wr, 1 op/s
Oct 09 09:40:22 compute-2 sudo[32726]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:22.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:23 compute-2 sudo[32879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrowbummwthgslilinfteasfqqlxfmkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002822.8732502-622-147736712675483/AnsiballZ_stat.py'
Oct 09 09:40:23 compute-2 sudo[32879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:23 compute-2 python3.9[32881]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:23 compute-2 sudo[32879]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:23.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:23 compute-2 sudo[32957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtwuhyonxvlrilmdkalygvevjmynyryq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002822.8732502-622-147736712675483/AnsiballZ_file.py'
Oct 09 09:40:23 compute-2 sudo[32957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:23 compute-2 python3.9[32960]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:23 compute-2 sudo[32957]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:23 compute-2 ceph-mon[5983]: pgmap v202: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Oct 09 09:40:24 compute-2 sudo[33110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfeccktuqixscfvjauaeeycsproayjxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002823.8800092-659-144791372218845/AnsiballZ_stat.py'
Oct 09 09:40:24 compute-2 sudo[33110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:24.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:24 compute-2 python3.9[33112]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:24 compute-2 sudo[33110]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:24 compute-2 sudo[33189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqalsnydyechlsleyarhqptzonixppc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002823.8800092-659-144791372218845/AnsiballZ_file.py'
Oct 09 09:40:24 compute-2 sudo[33189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:24 compute-2 python3.9[33191]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:24 compute-2 sudo[33189]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:25.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:25 compute-2 sudo[33342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eftldnmwhiitojxtbnxukngayjdejtiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002825.5157926-714-242253203720977/AnsiballZ_lineinfile.py'
Oct 09 09:40:25 compute-2 sudo[33342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:25 compute-2 ceph-mon[5983]: pgmap v203: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Oct 09 09:40:26 compute-2 python3.9[33344]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:26 compute-2 sudo[33342]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:26.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:26 compute-2 sshd-session[1523]: Received disconnect from 192.168.26.46 port 34468:11: disconnected by user
Oct 09 09:40:26 compute-2 sshd-session[1523]: Disconnected from user zuul 192.168.26.46 port 34468
Oct 09 09:40:26 compute-2 sshd-session[1520]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:26 compute-2 systemd-logind[800]: Session 3 logged out. Waiting for processes to exit.
Oct 09 09:40:26 compute-2 systemd[1]: session-3.scope: Deactivated successfully.
Oct 09 09:40:26 compute-2 systemd[1]: session-3.scope: Consumed 6.265s CPU time.
Oct 09 09:40:26 compute-2 systemd-logind[800]: Removed session 3.
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:27 compute-2 sudo[33509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvtywejniglyiwyepweoilxngvrvphdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002827.024572-758-258473579778035/AnsiballZ_setup.py'
Oct 09 09:40:27 compute-2 sudo[33509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:27.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:27 compute-2 python3.9[33511]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:27 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:27 compute-2 sudo[33509]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-2 ceph-mon[5983]: pgmap v204: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Oct 09 09:40:28 compute-2 sudo[33594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zryyqomircwcwsztenemyfdpcpohvakd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002827.024572-758-258473579778035/AnsiballZ_systemd.py'
Oct 09 09:40:28 compute-2 sudo[33594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:28.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:28 compute-2 python3.9[33596]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:28 compute-2 sudo[33594]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:28 compute-2 sudo[33624]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:28 compute-2 sudo[33624]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:28 compute-2 sudo[33624]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094028 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:29 compute-2 sshd-session[29033]: Connection closed by 192.168.122.30 port 49846
Oct 09 09:40:29 compute-2 sshd-session[29029]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:29 compute-2 systemd[1]: session-25.scope: Deactivated successfully.
Oct 09 09:40:29 compute-2 systemd[1]: session-25.scope: Consumed 17.742s CPU time.
Oct 09 09:40:29 compute-2 systemd-logind[800]: Session 25 logged out. Waiting for processes to exit.
Oct 09 09:40:29 compute-2 systemd-logind[800]: Removed session 25.
Oct 09 09:40:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:29.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094029 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Oct 09 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:29 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328002f80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:30 compute-2 ceph-mon[5983]: pgmap v205: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:30.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094030 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:31 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:32 compute-2 ceph-mon[5983]: pgmap v206: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:32.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:32 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:32 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:33 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:34 compute-2 ceph-mon[5983]: pgmap v207: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Oct 09 09:40:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:40:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:34.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:40:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:34 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330002cb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:34 compute-2 sshd-session[33655]: Accepted publickey for zuul from 192.168.122.30 port 40404 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:34 compute-2 systemd-logind[800]: New session 26 of user zuul.
Oct 09 09:40:34 compute-2 systemd[1]: Started Session 26 of User zuul.
Oct 09 09:40:34 compute-2 sshd-session[33655]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:34 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:35 compute-2 sudo[33808]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-clptvcpwecbkgffstofxnxjtoefqvfjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002834.7268739-28-195069782370069/AnsiballZ_file.py'
Oct 09 09:40:35 compute-2 sudo[33808]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:35 compute-2 python3.9[33810]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:35 compute-2 sudo[33808]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:35.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:35 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:35 compute-2 sudo[33961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfxeevjeuvazvyqmqagoklbnpxeektfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002835.3942723-64-273205832914870/AnsiballZ_stat.py'
Oct 09 09:40:35 compute-2 sudo[33961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:35 compute-2 python3.9[33963]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:35 compute-2 sudo[33961]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:36 compute-2 ceph-mon[5983]: pgmap v208: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:36 compute-2 sudo[34039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmbswouexhqedvapwohopcxidydctmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002835.3942723-64-273205832914870/AnsiballZ_file.py'
Oct 09 09:40:36 compute-2 sudo[34039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:36.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:36 compute-2 python3.9[34041]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:36 compute-2 sudo[34039]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:36 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:36 compute-2 sshd-session[33658]: Connection closed by 192.168.122.30 port 40404
Oct 09 09:40:36 compute-2 sshd-session[33655]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:40:36 compute-2 systemd[1]: session-26.scope: Deactivated successfully.
Oct 09 09:40:36 compute-2 systemd[1]: session-26.scope: Consumed 1.153s CPU time.
Oct 09 09:40:36 compute-2 systemd-logind[800]: Session 26 logged out. Waiting for processes to exit.
Oct 09 09:40:36 compute-2 systemd-logind[800]: Removed session 26.
Oct 09 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:36 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:37.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:37 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:38 compute-2 ceph-mon[5983]: pgmap v209: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:40:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:40:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:38.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:38 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:38 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:39.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:39 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330004370 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:40 compute-2 ceph-mon[5983]: pgmap v210: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:40.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:40 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328003a80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:40 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:41.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:41 compute-2 sshd-session[34072]: Accepted publickey for zuul from 192.168.122.30 port 52440 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:40:41 compute-2 systemd-logind[800]: New session 27 of user zuul.
Oct 09 09:40:41 compute-2 systemd[1]: Started Session 27 of User zuul.
Oct 09 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:41 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:41 compute-2 sshd-session[34072]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:42 compute-2 ceph-mon[5983]: pgmap v211: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:42.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:42 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:42 compute-2 python3.9[34227]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:42 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c0021f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:43 compute-2 sudo[34381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmwzvksarrmaktbnkuevwtpxhgigneno ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002842.7919934-61-151698470336974/AnsiballZ_file.py'
Oct 09 09:40:43 compute-2 sudo[34381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:43 compute-2 python3.9[34383]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:43 compute-2 sudo[34381]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:43.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:43 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c009d80 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:43 compute-2 sudo[34557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmyxfskkhjglmpkkobohqppqbwisschu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002843.5602825-85-16744856738829/AnsiballZ_stat.py'
Oct 09 09:40:43 compute-2 sudo[34557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:44 compute-2 python3.9[34559]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:44 compute-2 sudo[34557]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:44 compute-2 ceph-mon[5983]: pgmap v212: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Oct 09 09:40:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:44.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:44 compute-2 sudo[34636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxixueklgfmoqycmhvsloincjlmignnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002843.5602825-85-16744856738829/AnsiballZ_file.py'
Oct 09 09:40:44 compute-2 sudo[34636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:44 compute-2 python3.9[34638]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.t975grzf recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:44 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:44 compute-2 sudo[34636]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:44 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334001ed0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:45 compute-2 sudo[34788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-swgeofeveqewizcnavdkvvrktypifotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002844.8386657-145-183126197160578/AnsiballZ_stat.py'
Oct 09 09:40:45 compute-2 sudo[34788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:45 compute-2 ceph-mon[5983]: pgmap v213: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:45 compute-2 python3.9[34790]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:45 compute-2 sudo[34788]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:45 compute-2 sudo[34866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsohzifjppxwiudoakpqcmljmuvhnjnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002844.8386657-145-183126197160578/AnsiballZ_file.py'
Oct 09 09:40:45 compute-2 sudo[34866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:45.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:45 compute-2 python3.9[34868]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.ecbsxskw recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:45 compute-2 sudo[34866]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:45 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00a820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:46 compute-2 sudo[35019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffyuwzaplordtoitssteupiewdfgnkbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002845.8565361-184-3641440779125/AnsiballZ_file.py'
Oct 09 09:40:46 compute-2 sudo[35019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:46 compute-2 python3.9[35021]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:46.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:46 compute-2 sudo[35019]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:46 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00a820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:46 compute-2 sudo[35172]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwhzlqbracnxjhffiaqbynpdsjxfvfpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002846.3556795-209-149558828017719/AnsiballZ_stat.py'
Oct 09 09:40:46 compute-2 sudo[35172]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:46 compute-2 python3.9[35174]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:46 compute-2 sudo[35172]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:46 compute-2 sudo[35250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sikeglknanhbvnxwgpxqyjbzxvkjprfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002846.3556795-209-149558828017719/AnsiballZ_file.py'
Oct 09 09:40:46 compute-2 sudo[35250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:46 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:47 compute-2 python3.9[35252]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:47 compute-2 sudo[35250]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:47 compute-2 sudo[35402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nplnhrqxwveevtxcozhawgogywtxetsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.1266909-209-166140282541264/AnsiballZ_stat.py'
Oct 09 09:40:47 compute-2 sudo[35402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:47 compute-2 ceph-mon[5983]: pgmap v214: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:47.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:47 compute-2 python3.9[35404]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:47 compute-2 sudo[35402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:47 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33340029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:47 compute-2 sudo[35481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-riswiovjvdbfhzwqvvrwjckpsmhazapo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.1266909-209-166140282541264/AnsiballZ_file.py'
Oct 09 09:40:47 compute-2 sudo[35481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:47 compute-2 python3.9[35483]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:40:47 compute-2 sudo[35481]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:40:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:48.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:40:48 compute-2 sudo[35634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtfqupfsfvkmmnjvftrxioyvsjgpnfqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002847.9658878-277-200531642340797/AnsiballZ_file.py'
Oct 09 09:40:48 compute-2 sudo[35634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:48 compute-2 python3.9[35636]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:48 compute-2 sudo[35634]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:48 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:48 compute-2 sudo[35730]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:40:48 compute-2 sudo[35730]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:48 compute-2 sudo[35730]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:48 compute-2 sudo[35811]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmrkinxhauobuayfodngkoxycxhxflfn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002848.564495-301-151791196929168/AnsiballZ_stat.py'
Oct 09 09:40:48 compute-2 sudo[35811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:48 compute-2 python3.9[35813]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:48 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:48 compute-2 sudo[35811]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-2 sudo[35889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-keotgugasiuahftbnhpmfromewfavirh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002848.564495-301-151791196929168/AnsiballZ_file.py'
Oct 09 09:40:49 compute-2 sudo[35889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:49 compute-2 python3.9[35891]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:49 compute-2 sudo[35889]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-2 ceph-mon[5983]: pgmap v215: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:49.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:49 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:49 compute-2 sudo[36042]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpomdvgyjfnjcwzgpbguhvsdqbccpcfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002849.4723146-337-6653874849242/AnsiballZ_stat.py'
Oct 09 09:40:49 compute-2 sudo[36042]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:49 compute-2 python3.9[36044]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:49 compute-2 sudo[36042]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:49 compute-2 sudo[36120]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxgbgkrdlammtuhdczvmurgyeriyrvo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002849.4723146-337-6653874849242/AnsiballZ_file.py'
Oct 09 09:40:49 compute-2 sudo[36120]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:50 compute-2 python3.9[36122]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:50 compute-2 sudo[36120]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:50.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:50 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33340029d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:50 compute-2 sudo[36273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-renwvxqhsqzyypejwajcitqhwkuacxdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002850.3187835-373-214224933627461/AnsiballZ_systemd.py'
Oct 09 09:40:50 compute-2 sudo[36273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:50 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:51 compute-2 python3.9[36275]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:51 compute-2 systemd[1]: Reloading.
Oct 09 09:40:51 compute-2 systemd-rc-local-generator[36299]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:40:51 compute-2 systemd-sysv-generator[36302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:40:51 compute-2 sudo[36273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000999994s ======
Oct 09 09:40:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:51.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999994s
Oct 09 09:40:51 compute-2 ceph-mon[5983]: pgmap v216: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:51 compute-2 sudo[36463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbrqzdkxgthzudyfjuvwefopdnqjlvnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002851.391362-397-218140614408829/AnsiballZ_stat.py'
Oct 09 09:40:51 compute-2 sudo[36463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:51 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00b140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:51 compute-2 python3.9[36465]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:51 compute-2 sudo[36463]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:51 compute-2 sudo[36541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfsbwqhbkgseuagumqibadnqwphlcvqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002851.391362-397-218140614408829/AnsiballZ_file.py'
Oct 09 09:40:51 compute-2 sudo[36541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-2 python3.9[36543]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:52 compute-2 sudo[36541]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:52.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:52 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:52 compute-2 sudo[36694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixygefcadujttczilqzzowhbilczwywf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002852.2533956-433-151783122264641/AnsiballZ_stat.py'
Oct 09 09:40:52 compute-2 sudo[36694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-2 python3.9[36696]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:52 compute-2 sudo[36694]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:52 compute-2 sudo[36772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwyjwoecunafhgtrrrbzcbntntiaapgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002852.2533956-433-151783122264641/AnsiballZ_file.py'
Oct 09 09:40:52 compute-2 sudo[36772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:52 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:52 compute-2 python3.9[36774]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:52 compute-2 sudo[36772]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:53 compute-2 sudo[36924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfuanvdlvvobzolgenodeaxjujjoqjhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002853.1002624-469-139597100419187/AnsiballZ_systemd.py'
Oct 09 09:40:53 compute-2 sudo[36924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:53.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:53 compute-2 ceph-mon[5983]: pgmap v217: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:40:53 compute-2 python3.9[36926]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:40:53 compute-2 systemd[1]: Reloading.
Oct 09 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:53 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:53 compute-2 systemd-rc-local-generator[36950]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:40:53 compute-2 systemd-sysv-generator[36953]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:53 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:40:53 compute-2 systemd[9018]: Created slice User Background Tasks Slice.
Oct 09 09:40:53 compute-2 systemd[9018]: Starting Cleanup of User's Temporary Files and Directories...
Oct 09 09:40:53 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:40:53 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:40:53 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:40:53 compute-2 systemd[9018]: Finished Cleanup of User's Temporary Files and Directories.
Oct 09 09:40:53 compute-2 sudo[36924]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:54.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:54 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:54 compute-2 python3.9[37120]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:40:54 compute-2 network[37137]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:40:54 compute-2 network[37138]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:40:54 compute-2 network[37139]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:54 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:55.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:55 compute-2 ceph-mon[5983]: pgmap v218: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:55 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:56 compute-2 sudo[37239]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:40:56 compute-2 sudo[37239]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:56 compute-2 sudo[37239]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:56 compute-2 sudo[37268]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:40:56 compute-2 sudo[37268]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:40:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:56.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:56 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:56 compute-2 sudo[37268]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:56 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:57 compute-2 sudo[37483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzsgkqmufrylzgggliklukmfberpqqub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002857.148058-548-176675153710743/AnsiballZ_stat.py'
Oct 09 09:40:57 compute-2 sudo[37483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:57.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:57 compute-2 ceph-mon[5983]: pgmap v219: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:57 compute-2 python3.9[37485]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:57 compute-2 sudo[37483]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:57 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:57 compute-2 sudo[37562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gncamdqgugvnysiknsnonokacjyvearg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002857.148058-548-176675153710743/AnsiballZ_file.py'
Oct 09 09:40:57 compute-2 sudo[37562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:57 compute-2 python3.9[37564]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:57 compute-2 sudo[37562]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:40:58.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:58 compute-2 sudo[37715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apbjnnjcwnbjxdjzdswgecxleseldfuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.1550496-587-232840395680794/AnsiballZ_file.py'
Oct 09 09:40:58 compute-2 sudo[37715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:58 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:58 compute-2 python3.9[37717]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:58 compute-2 sudo[37715]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:58 compute-2 sudo[37867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frdrozlwkejicapznjldmyztzrjpmhdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.6817756-611-221224283828367/AnsiballZ_stat.py'
Oct 09 09:40:58 compute-2 sudo[37867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:58 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:59 compute-2 python3.9[37869]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:40:59 compute-2 sudo[37867]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:59 compute-2 sudo[37945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzkwgveaekeslcqpatmsliwvyyfmdgaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002858.6817756-611-221224283828367/AnsiballZ_file.py'
Oct 09 09:40:59 compute-2 sudo[37945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-2 ceph-mon[5983]: pgmap v220: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:40:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:40:59 compute-2 python3.9[37947]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:40:59 compute-2 sudo[37945]: pam_unix(sudo:session): session closed for user root
Oct 09 09:40:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:40:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:40:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:40:59.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:40:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:40:59 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:40:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:40:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:40:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:00 compute-2 sudo[38099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhvllazdnaxwjypxxsennvjptzurshsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002859.7737303-658-76148895739217/AnsiballZ_timezone.py'
Oct 09 09:41:00 compute-2 sudo[38099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:00.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:00 compute-2 python3.9[38101]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct 09 09:41:00 compute-2 systemd[1]: Starting Time & Date Service...
Oct 09 09:41:00 compute-2 systemd[1]: Started Time & Date Service.
Oct 09 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:00 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:00 compute-2 sudo[38099]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:00 compute-2 sudo[38255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irwuggbnfnsllwqlojbdjnwgkvlydmfy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002860.7172275-683-197116273608125/AnsiballZ_file.py'
Oct 09 09:41:00 compute-2 sudo[38255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:00 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:01 compute-2 python3.9[38257]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:01 compute-2 sudo[38255]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-2 ceph-mon[5983]: pgmap v221: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:01.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:01 compute-2 sudo[38407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgfpcbntwhflyoqwceyltqbdlvzplenu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002861.231495-707-89199287535779/AnsiballZ_stat.py'
Oct 09 09:41:01 compute-2 sudo[38407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:01 compute-2 python3.9[38409]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:01 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:01 compute-2 sudo[38407]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:01 compute-2 sudo[38486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfoqvggsydpcgnadhnzvzzyejvirqkgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002861.231495-707-89199287535779/AnsiballZ_file.py'
Oct 09 09:41:01 compute-2 sudo[38486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:01 compute-2 python3.9[38488]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:01 compute-2 sudo[38486]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:01 compute-2 sudo[38489]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:41:01 compute-2 sudo[38489]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:01 compute-2 sudo[38489]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:02.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:02 compute-2 sudo[38664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjdddlxyffjozjncrcttzkpfntwpfwtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.1253033-743-37484657198559/AnsiballZ_stat.py'
Oct 09 09:41:02 compute-2 sudo[38664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:02 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:02 compute-2 python3.9[38666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:02 compute-2 sudo[38664]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-2 sudo[38742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pynszvnoaqdexnezwosbytmubeblreiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.1253033-743-37484657198559/AnsiballZ_file.py'
Oct 09 09:41:02 compute-2 sudo[38742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:02 compute-2 python3.9[38744]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.n3x6eb4v recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:02 compute-2 sudo[38742]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:41:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:41:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:02 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00ba60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:03 compute-2 sudo[38894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vchzmuhffpxfauvszrchffjrvqdjjukp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.9675527-779-186051571361809/AnsiballZ_stat.py'
Oct 09 09:41:03 compute-2 sudo[38894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:03 compute-2 python3.9[38896]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:03 compute-2 sudo[38894]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:03.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:03 compute-2 sudo[38972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvsuqlutuzxalhsvckgzgkiuagwtneuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002862.9675527-779-186051571361809/AnsiballZ_file.py'
Oct 09 09:41:03 compute-2 sudo[38972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:03 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328005360 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:03 compute-2 python3.9[38975]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:03 compute-2 sudo[38972]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:03 compute-2 ceph-mon[5983]: pgmap v222: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:04 compute-2 sudo[39126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjhppxvjebnqqtlefjkuotupzvccrjil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002863.8619683-818-62624797644865/AnsiballZ_command.py'
Oct 09 09:41:04 compute-2 sudo[39126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:04.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:04 compute-2 python3.9[39128]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:04 compute-2 sudo[39126]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:04 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:04 compute-2 sudo[39279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcjdrvewjgkaukshttwuthvlehlsywyq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002864.485713-842-150487018793432/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:41:04 compute-2 sudo[39279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:04 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:04 compute-2 python3[39281]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:41:04 compute-2 sudo[39279]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:05 compute-2 sudo[39431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyagqnntkqbzxdabuloeorcktevikecp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002865.1511204-866-189776803395238/AnsiballZ_stat.py'
Oct 09 09:41:05 compute-2 sudo[39431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:05.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:05 compute-2 python3.9[39433]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:05 compute-2 sudo[39431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:05 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:05 compute-2 sudo[39510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrjgbedbqppasevqljybxppvyrwidsfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002865.1511204-866-189776803395238/AnsiballZ_file.py'
Oct 09 09:41:05 compute-2 sudo[39510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:05 compute-2 python3.9[39512]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:05 compute-2 ceph-mon[5983]: pgmap v223: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:05 compute-2 sudo[39510]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:06.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:06 compute-2 sudo[39663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrmyyzzzvrsudgnpcwuunayyemzeyfqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.0312948-901-70080999468057/AnsiballZ_stat.py'
Oct 09 09:41:06 compute-2 sudo[39663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:06 compute-2 python3.9[39665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:06 compute-2 sudo[39663]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:06 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:06 compute-2 sudo[39741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-defteaeyxqdgwnyvvrhmxuslckdfgbmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.0312948-901-70080999468057/AnsiballZ_file.py'
Oct 09 09:41:06 compute-2 sudo[39741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:06 compute-2 python3.9[39743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:06 compute-2 sudo[39741]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:06 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334003b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:07 compute-2 sudo[39893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwdqdfedbkxjvfzcebbgliblnewzyglu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.9373798-938-221741451641204/AnsiballZ_stat.py'
Oct 09 09:41:07 compute-2 sudo[39893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:07 compute-2 python3.9[39895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:07 compute-2 sudo[39893]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:07.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:07 compute-2 sudo[39971]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzfxppnmumxleoxnenutunsnirkcnynp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002866.9373798-938-221741451641204/AnsiballZ_file.py'
Oct 09 09:41:07 compute-2 sudo[39971]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:07 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:07 compute-2 python3.9[39973]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:07 compute-2 sudo[39971]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:07 compute-2 ceph-mon[5983]: pgmap v224: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:08 compute-2 sudo[40124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btnxzcrvnrmrkmzeusvikjciwuoyjlpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002867.7823653-974-117605098459565/AnsiballZ_stat.py'
Oct 09 09:41:08 compute-2 sudo[40124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:08 compute-2 python3.9[40126]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:08 compute-2 sudo[40124]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:08.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:08 compute-2 sudo[40203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qplvheoauejadowatdqdmxkfekwutazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002867.7823653-974-117605098459565/AnsiballZ_file.py'
Oct 09 09:41:08 compute-2 sudo[40203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:08 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:08 compute-2 python3.9[40205]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:08 compute-2 sudo[40203]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-2 sudo[40253]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:08 compute-2 sudo[40253]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:08 compute-2 sudo[40253]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:08 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:08 compute-2 sudo[40380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwkqmfsuamcbustnkzbxfqnmvxcodhhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002868.694675-1010-9536826048217/AnsiballZ_stat.py'
Oct 09 09:41:08 compute-2 sudo[40380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-2 python3.9[40382]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:09 compute-2 sudo[40380]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:09 compute-2 sudo[40458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssoxdinptyekihgxqjwbpnljfdrcmvww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002868.694675-1010-9536826048217/AnsiballZ_file.py'
Oct 09 09:41:09 compute-2 sudo[40458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:09.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:09 compute-2 python3.9[40460]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:09 compute-2 sudo[40458]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:09 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3334004480 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:09 compute-2 sudo[40611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgsrfztwomeseuwsaypcwpjyzuzgurqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002869.6951845-1049-111766417012100/AnsiballZ_command.py'
Oct 09 09:41:09 compute-2 sudo[40611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:09 compute-2 ceph-mon[5983]: pgmap v225: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:10 compute-2 python3.9[40613]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:10 compute-2 sudo[40611]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:10.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:10 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:10 compute-2 sudo[40767]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpinyisxgjlwvxqqdsldilcxfpicygjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002870.2163317-1073-90440113294379/AnsiballZ_blockinfile.py'
Oct 09 09:41:10 compute-2 sudo[40767]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:10 compute-2 python3.9[40769]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:10 compute-2 sudo[40767]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:10 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:11 compute-2 sudo[40919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flrkmgklrpnwsjbhbrpagpqqjfyxhvvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002870.8793583-1100-77872754107726/AnsiballZ_file.py'
Oct 09 09:41:11 compute-2 sudo[40919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:11 compute-2 python3.9[40921]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:11 compute-2 sudo[40919]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:11.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:11 compute-2 sudo[41072]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cbmbyzfondxoybykvjfxdlqrtvqclkxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002871.3297007-1100-156150593672101/AnsiballZ_file.py'
Oct 09 09:41:11 compute-2 sudo[41072]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:11 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:11 compute-2 python3.9[41074]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:11 compute-2 sudo[41072]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:11 compute-2 ceph-mon[5983]: pgmap v226: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:12 compute-2 sudo[41224]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prauzchorjistlzyfypodjmrciyltala ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002871.8369882-1145-26978875156424/AnsiballZ_mount.py'
Oct 09 09:41:12 compute-2 sudo[41224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:12.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:12 compute-2 python3.9[41227]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 09 09:41:12 compute-2 sudo[41224]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:12 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:12 compute-2 sudo[41377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnlsypvqjaplnkjfzgjhxlvhgfagxmqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002872.3985806-1145-53003011740472/AnsiballZ_mount.py'
Oct 09 09:41:12 compute-2 sudo[41377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:12 compute-2 python3.9[41379]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct 09 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:12 compute-2 sudo[41377]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:12 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:13 compute-2 sshd-session[34076]: Connection closed by 192.168.122.30 port 52440
Oct 09 09:41:13 compute-2 sshd-session[34072]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:13 compute-2 systemd[1]: session-27.scope: Deactivated successfully.
Oct 09 09:41:13 compute-2 systemd[1]: session-27.scope: Consumed 20.682s CPU time.
Oct 09 09:41:13 compute-2 systemd-logind[800]: Session 27 logged out. Waiting for processes to exit.
Oct 09 09:41:13 compute-2 systemd-logind[800]: Removed session 27.
Oct 09 09:41:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:13.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:13 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:13 compute-2 ceph-mon[5983]: pgmap v227: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:14.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:14 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:15.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:15 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:15 compute-2 ceph-mon[5983]: pgmap v228: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:16.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:16 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480027a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:16 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:17.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:17 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:17 compute-2 sshd-session[41410]: Accepted publickey for zuul from 192.168.122.30 port 39986 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:17 compute-2 systemd-logind[800]: New session 28 of user zuul.
Oct 09 09:41:17 compute-2 systemd[1]: Started Session 28 of User zuul.
Oct 09 09:41:17 compute-2 sshd-session[41410]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:17 compute-2 ceph-mon[5983]: pgmap v229: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:18.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:18 compute-2 sudo[41564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dirdoegythapkmkrlooydhewsywkbddx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002877.927916-20-27763556134390/AnsiballZ_tempfile.py'
Oct 09 09:41:18 compute-2 sudo[41564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:18 compute-2 python3.9[41566]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct 09 09:41:18 compute-2 sudo[41564]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:18 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:18 compute-2 sudo[41716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tbzlzmlijqisiqakrxdsxeuzmvwfovju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002878.5537395-56-180167784671530/AnsiballZ_stat.py'
Oct 09 09:41:18 compute-2 sudo[41716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:18 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:19 compute-2 python3.9[41718]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:19 compute-2 sudo[41716]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:19.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:19 compute-2 sudo[41870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfaopwkzeyuaybfiajltxmnrzammwtsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.1778865-81-212363536762224/AnsiballZ_slurp.py'
Oct 09 09:41:19 compute-2 sudo[41870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:19 compute-2 python3.9[41872]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct 09 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:19 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:19 compute-2 sudo[41870]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:19 compute-2 ceph-mon[5983]: pgmap v230: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:19 compute-2 sudo[42023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmbrhmrfajnvfpzapxrwfzwfqagdcvbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.7659876-104-31828867044063/AnsiballZ_stat.py'
Oct 09 09:41:19 compute-2 sudo[42023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:20 compute-2 python3.9[42025]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.5bjza0nn follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:20 compute-2 sudo[42023]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:20.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:20 compute-2 sudo[42149]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvmwvgshgguasirwtsmjdefskrjzayfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002879.7659876-104-31828867044063/AnsiballZ_copy.py'
Oct 09 09:41:20 compute-2 sudo[42149]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:20 compute-2 python3.9[42151]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.5bjza0nn mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002879.7659876-104-31828867044063/.source.5bjza0nn _original_basename=.2dnpcni6 follow=False checksum=231ee42d81be70362d898b48675a8dc8dc6887b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:20 compute-2 sudo[42149]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:20 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:21 compute-2 sudo[42301]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izcmzobabivggvgvmxgeybzokezrtpfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002880.8491807-149-245321112622040/AnsiballZ_setup.py'
Oct 09 09:41:21 compute-2 sudo[42301]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:21.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:21 compute-2 python3.9[42303]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:21 compute-2 sudo[42301]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:21 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:21 compute-2 ceph-mon[5983]: pgmap v231: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:22 compute-2 sudo[42454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxqgmlrfpakwbnqvmlllbkezsfcnczkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002881.7435088-174-94541625316055/AnsiballZ_blockinfile.py'
Oct 09 09:41:22 compute-2 sudo[42454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:22.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:22 compute-2 python3.9[42456]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDEdAe+aHzafP9dhAtdIAtOm2sC12803SCpA/3rl1ydGqAiReivZh0j/TO2wBzoqsan7nzM7eG4TWSpqK+0ZBgBjrUjB9Cj1eCLSLOLFpIUpLcs70zpiXFEg4VCxifit+r7hVmAjbLpb7lUOEBeuKAC+NijlzOD2XrC+yd3AhBkIuX/kEOqNS457QburXRcER973lXO7bXpB0owCrgGAzOsy1i7FT6Zz4mSB7l2Iy2drh0BXBPs+laJ9chzaIYm3t6/xdGegDzZd9R0R/aKxaO2CGff8by/bJ8Ga/DZNziOBiuIImaU3kBJc76SWraZeoiOMwDTosKuZfFadJWywRHIP1xUSkKdLGnB0MzpGtOhcIWX642g/WIM4+Y078U5nwtvOcNHpA/uT9uRc7nBCEzPpJVHtyVbh0kQ9x86pCj83Ph6ZZ1RPGolhJ6oztdGyl5QMj/rkG45+H83p9c18d5vzsZzrcKaYtBEg3BJ80PfCqFw5Al9hHq/55Yd0D5PiK8=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN+sxaZ1V99vc+E5ar8KEv4Hqy68kJM/buHn1/XxovLr
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDc5CVbyus+PfQGnwFQkfkACIJgIJPRc/fJ1ooz9D/2T/S79sUKftWyZ1JOurJ8lQdLc+LgRGezTzhfuY3R3F6E=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCow+01n6Hl7e4y/xRpTIYbwm1BUam3jmz5ScpeEvosFn7TfszdHV/Do5gTioKon9F6x7Kn2fhkWobIt7rTveNaK0lE2p35tJDQJQ5zYJD3N4aWHdvfaigYEXYaH3OOpmqEhRw/IyxGzW1MS8OfGUNyziUYt99LLYhcEkDneuZnPOI2444OzzU0pYxCtaVSevz9aDR2yi9BWKNIP8iMTNqu9UpE9IaOANEDrZu7gbGMBTDiR1lYzo1peJrtAa/cpTF9DoFnddTbpOMLjd6HaRrnifcc9fP1YtxWn8T1ldTjecUUCp2yo6ycdOUdBiJG9yWw1gI7SXYjeHJbX/1QS6HWd5DWxJFbSf0zP5d5BWyDf5+TFu1/gImUA0HT8WOYb4tm1QH1NAThcRLvtUFg32CcbqOnUyAxW0wDeGoLCW7EERN9OKr11fwlYjdyW/TbqYWRn0J2WhZa4OoZ/C4m9ug6PP7SEo9wXLqN9t4eArVkbeTemzPigVRqNrD2eywEU4k=
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCkglmiqZQwqqMItgWA6O04td1K/U4vAgm36NE9rj3U
                                            compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLD7v/1C4ThvDcQi8c4DTsjkszkaGHBX0ZNWy5MwKVH3Qt7bVSlXkD8SB3/nhOUlBIzdAK/JQpzVyqfy+61YZMk=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDKE7qnQSdbsdsOaGWRokEAHfuZHqF4BkfkIlbsIxi6+FzXfmziMPrsg1PoVUBFOzaP55y6aRtUEaXoCsB+KxPGXhHnh3IdEYTUa5EvJs6/mUlEqIwltt8CLNKUrDV6N38V1v5gaRPIAI5iTwtbap14q+0iDF8MVi8MPKlkqoL/+Z49sJ4HqR31EZpD4cWKso/dkKZQSuVQg+TgJ3bnUKIRYPDS7fjVuZpr0KMyU+v4wjBKXvles8lctvRXdfpY2/33XtBG2af+p/+5mg47b5ylWC3wISLO590WzC4X2T0Pv1a6I9O/Dt3V8xyTfzbqi4ia9/kwNBJg1GGqNBssdedHK3AZDOTSd9U+/C1R9oBDXZ7nSo3hIzMQvrm5DXkthix56gd3x9MrMMzc+wTlFtlm2XwpMg7PtdxMZK++rIfPVxzKXBBQsdDd0W3cbam616N/XERaDJKIUqnPe5sE1qhpaFt8aNtwg+buZpYK5ubLbuJZpASgSC6dIuDsEIk6Af8=
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEtxusJG2g5S2RnWLxtcDjdiTuv+VWibld9MVjIgPUzn
                                            compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBG1pQwHgci56FauRELJKl6O8ntBVH1APLVaVNPCodlG/V+A+h79tYrSqi3QKycc18niRc7Eiq8wWQ8VbX+OhkmY=
                                             create=True mode=0644 path=/tmp/ansible.5bjza0nn state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:22 compute-2 sudo[42454]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:22 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:22 compute-2 sudo[42607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctjabdcqflgdtrzysbzljogomngvmsgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002882.391233-198-73887279149310/AnsiballZ_command.py'
Oct 09 09:41:22 compute-2 sudo[42607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:22 compute-2 python3.9[42609]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.5bjza0nn' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:22 compute-2 sudo[42607]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:22 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:23 compute-2 sudo[42761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jggvevtfmwchbwfyaiqcwpzktycqveuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002883.0242605-222-9836416592805/AnsiballZ_file.py'
Oct 09 09:41:23 compute-2 sudo[42761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:23.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:23 compute-2 python3.9[42763]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.5bjza0nn state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:23 compute-2 sudo[42761]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:23 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c00cf50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:23 compute-2 sshd-session[41413]: Connection closed by 192.168.122.30 port 39986
Oct 09 09:41:23 compute-2 sshd-session[41410]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:23 compute-2 systemd[1]: session-28.scope: Deactivated successfully.
Oct 09 09:41:23 compute-2 systemd[1]: session-28.scope: Consumed 3.859s CPU time.
Oct 09 09:41:23 compute-2 systemd-logind[800]: Session 28 logged out. Waiting for processes to exit.
Oct 09 09:41:23 compute-2 systemd-logind[800]: Removed session 28.
Oct 09 09:41:23 compute-2 ceph-mon[5983]: pgmap v232: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:24.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:24 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:24 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000999994s ======
Oct 09 09:41:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:25.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000999994s
Oct 09 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:25 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f334c003820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:25 compute-2 ceph-mon[5983]: pgmap v233: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:26.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:26 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:27.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:27 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3328006070 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:27 compute-2 ceph-mon[5983]: pgmap v234: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:28.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f33480032c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:28 compute-2 sudo[42795]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:28 compute-2 sudo[42795]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:28 compute-2 sudo[42795]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:28 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:29 compute-2 sshd-session[42820]: Accepted publickey for zuul from 192.168.122.30 port 44482 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:29 compute-2 systemd-logind[800]: New session 29 of user zuul.
Oct 09 09:41:29 compute-2 systemd[1]: Started Session 29 of User zuul.
Oct 09 09:41:29 compute-2 sshd-session[42820]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:29.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:29 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f3330005bb0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:29 compute-2 python3.9[42975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:29 compute-2 ceph-mon[5983]: pgmap v235: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:30.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:30 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 09 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c001340 fd 38 proxy header rest len failed header rlen = % (will set dead)
Oct 09 09:41:30 compute-2 sudo[43132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-voswdcmcvtxggvejnejfenjndvzqyzjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002890.136642-58-174846619369239/AnsiballZ_systemd.py'
Oct 09 09:41:30 compute-2 sudo[43132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:30 compute-2 python3.9[43134]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 09 09:41:30 compute-2 sudo[43132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[31977]: 09/10/2025 09:41:30 : epoch 68e782fe : compute-2 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f333c001340 fd 38 proxy ignored for local
Oct 09 09:41:30 compute-2 kernel: ganesha.nfsd[34075]: segfault at 50 ip 00007f33e572732e sp 00007f3399ffa210 error 4 in libntirpc.so.5.8[7f33e570c000+2c000] likely on CPU 0 (core 0, socket 0)
Oct 09 09:41:30 compute-2 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Oct 09 09:41:30 compute-2 systemd[1]: Started Process Core Dump (PID 43161/UID 0).
Oct 09 09:41:31 compute-2 sudo[43288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbdizrhigcgpeejaxxweeuvfgvftohqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002891.0196261-82-260864731880189/AnsiballZ_systemd.py'
Oct 09 09:41:31 compute-2 sudo[43288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:31.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:31 compute-2 python3.9[43290]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:41:31 compute-2 sudo[43288]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:31 compute-2 ceph-mon[5983]: pgmap v236: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:31 compute-2 sudo[43442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eopwynpgjsnmiemseqynrmhjzefpyvjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002891.7081058-109-139750333925434/AnsiballZ_command.py'
Oct 09 09:41:31 compute-2 sudo[43442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:32 compute-2 systemd-coredump[43162]: Process 31981 (ganesha.nfsd) of user 0 dumped core.
                                                   
                                                   Stack trace of thread 54:
                                                   #0  0x00007f33e572732e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                   ELF object binary architecture: AMD x86-64
Oct 09 09:41:32 compute-2 systemd[1]: systemd-coredump@2-43161-0.service: Deactivated successfully.
Oct 09 09:41:32 compute-2 systemd[1]: systemd-coredump@2-43161-0.service: Consumed 1.020s CPU time.
Oct 09 09:41:32 compute-2 podman[43452]: 2025-10-09 09:41:32.146217206 +0000 UTC m=+0.030112942 container died c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 09 09:41:32 compute-2 python3.9[43444]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:32 compute-2 systemd[1]: var-lib-containers-storage-overlay-7c5464c8d2742068008111669309389b7bdbab7b22ba6ee593786a450773aa84-merged.mount: Deactivated successfully.
Oct 09 09:41:32 compute-2 podman[43452]: 2025-10-09 09:41:32.167553026 +0000 UTC m=+0.051448763 container remove c2aa08c1279fba3793939e7efb04926d3e2b65d03826b931e797c1a842084d4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct 09 09:41:32 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Main process exited, code=exited, status=139/n/a
Oct 09 09:41:32 compute-2 sudo[43442]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:32 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Failed with result 'exit-code'.
Oct 09 09:41:32 compute-2 sudo[43637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjpgaefdjyuzumlcmswubgfrfualviqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002892.3149834-133-48975118521328/AnsiballZ_stat.py'
Oct 09 09:41:32 compute-2 sudo[43637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:32 compute-2 python3.9[43639]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:32 compute-2 sudo[43637]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:33 compute-2 sudo[43789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgiplpzygneqgumxtztsejhnfxhzfufe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002893.0105693-160-81466494238168/AnsiballZ_file.py'
Oct 09 09:41:33 compute-2 sudo[43789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:33.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:33 compute-2 python3.9[43791]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:33 compute-2 sudo[43789]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:33 compute-2 sshd-session[42824]: Connection closed by 192.168.122.30 port 44482
Oct 09 09:41:33 compute-2 sshd-session[42820]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:33 compute-2 systemd-logind[800]: Session 29 logged out. Waiting for processes to exit.
Oct 09 09:41:33 compute-2 systemd[1]: session-29.scope: Deactivated successfully.
Oct 09 09:41:33 compute-2 systemd[1]: session-29.scope: Consumed 3.010s CPU time.
Oct 09 09:41:33 compute-2 systemd-logind[800]: Removed session 29.
Oct 09 09:41:33 compute-2 ceph-mon[5983]: pgmap v237: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Oct 09 09:41:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:34.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:41:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:35.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:41:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:35 compute-2 ceph-mon[5983]: pgmap v238: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:36.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094136 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:41:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:37.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:38 compute-2 ceph-mon[5983]: pgmap v239: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:38.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:38 compute-2 sshd-session[43822]: Accepted publickey for zuul from 192.168.122.30 port 34000 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:38 compute-2 systemd-logind[800]: New session 30 of user zuul.
Oct 09 09:41:38 compute-2 systemd[1]: Started Session 30 of User zuul.
Oct 09 09:41:38 compute-2 sshd-session[43822]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:39.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:39 compute-2 python3.9[43975]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:40 compute-2 ceph-mon[5983]: pgmap v240: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:40.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:40 compute-2 sudo[44131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgafovokeeayajbznqvakzfpphvqlasu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002900.1225464-64-19186860382788/AnsiballZ_setup.py'
Oct 09 09:41:40 compute-2 sudo[44131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:40 compute-2 python3.9[44133]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:41:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:40 compute-2 sudo[44131]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:41 compute-2 sudo[44215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipkkpfstcoxejcynxtlqjdtwjkdnvcey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002900.1225464-64-19186860382788/AnsiballZ_dnf.py'
Oct 09 09:41:41 compute-2 sudo[44215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:41 compute-2 python3.9[44217]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct 09 09:41:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:41.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:42 compute-2 ceph-mon[5983]: pgmap v241: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Oct 09 09:41:42 compute-2 sudo[44215]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:42.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:42 compute-2 systemd[1]: ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609@nfs.cephfs.1.0.compute-2.cpioam.service: Scheduled restart job, restart counter is at 3.
Oct 09 09:41:42 compute-2 systemd[1]: Stopped Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:41:42 compute-2 systemd[1]: Starting Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609...
Oct 09 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.576323776 +0000 UTC m=+0.032591630 container create 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Oct 09 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Oct 09 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:41:42 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ad8b3d376e571fbe50bff146c3e9c037b7a8efb22dbf0ba051b4119ba4946c1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.1.0.compute-2.cpioam-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Oct 09 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.610781588 +0000 UTC m=+0.067049441 container init 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 09 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.616751923 +0000 UTC m=+0.073019777 container start 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 09:41:42 compute-2 bash[44360]: 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99
Oct 09 09:41:42 compute-2 podman[44360]: 2025-10-09 09:41:42.564337038 +0000 UTC m=+0.020604912 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Oct 09 09:41:42 compute-2 systemd[1]: Started Ceph nfs.cephfs.1.0.compute-2.cpioam for 286f8bf0-da72-5823-9a4e-ac4457d9e609.
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:42 compute-2 python3.9[44424]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:41:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:43.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:44 compute-2 ceph-mon[5983]: pgmap v242: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct 09 09:41:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:44.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:44 compute-2 python3.9[44615]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:41:45 compute-2 python3.9[44765]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:45.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:45 compute-2 python3.9[44916]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:41:46 compute-2 ceph-mon[5983]: pgmap v243: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Oct 09 09:41:46 compute-2 sshd-session[43825]: Connection closed by 192.168.122.30 port 34000
Oct 09 09:41:46 compute-2 sshd-session[43822]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:41:46 compute-2 systemd-logind[800]: Session 30 logged out. Waiting for processes to exit.
Oct 09 09:41:46 compute-2 systemd[1]: session-30.scope: Deactivated successfully.
Oct 09 09:41:46 compute-2 systemd[1]: session-30.scope: Consumed 4.473s CPU time.
Oct 09 09:41:46 compute-2 systemd-logind[800]: Removed session 30.
Oct 09 09:41:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:46.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:47.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:48 compute-2 ceph-mon[5983]: pgmap v244: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:48.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:48 compute-2 sudo[44944]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:41:48 compute-2 sudo[44944]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:41:48 compute-2 sudo[44944]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:49.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:50 compute-2 ceph-mon[5983]: pgmap v245: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:41:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:50.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:51 compute-2 sshd-session[44971]: Accepted publickey for zuul from 192.168.122.30 port 36218 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:41:51 compute-2 systemd-logind[800]: New session 31 of user zuul.
Oct 09 09:41:51 compute-2 systemd[1]: Started Session 31 of User zuul.
Oct 09 09:41:51 compute-2 sshd-session[44971]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:41:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:51.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:51 compute-2 python3.9[45125]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:41:52 compute-2 ceph-mon[5983]: pgmap v246: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Oct 09 09:41:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000012s ======
Oct 09 09:41:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:52.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000012s
Oct 09 09:41:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:41:53 compute-2 sudo[45280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-javhhlkfxmrihsuzxoydgugkaiwpjvwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002912.852901-113-134500758541439/AnsiballZ_file.py'
Oct 09 09:41:53 compute-2 sudo[45280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:53 compute-2 python3.9[45282]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:53 compute-2 sudo[45280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:53.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:53 compute-2 sudo[45433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sipgbbxpwdblrknpkkyfkzrqselgzdzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.419981-113-163847511991877/AnsiballZ_file.py'
Oct 09 09:41:53 compute-2 sudo[45433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:53 compute-2 python3.9[45435]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:53 compute-2 sudo[45433]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:54 compute-2 ceph-mon[5983]: pgmap v247: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:54 compute-2 sudo[45586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usmhyeujeuxakfzalefoegieedjlmiif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.9431949-158-64012592389817/AnsiballZ_stat.py'
Oct 09 09:41:54 compute-2 sudo[45586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:54.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:54 compute-2 python3.9[45588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:54 compute-2 sudo[45586]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:54 compute-2 sudo[45709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgmxomtrdbdphmmlzbmkorivazadbadi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002913.9431949-158-64012592389817/AnsiballZ_copy.py'
Oct 09 09:41:54 compute-2 sudo[45709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:54 compute-2 python3.9[45711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002913.9431949-158-64012592389817/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=d3993d88f699999b71af21c3d560a684811602ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:54 compute-2 sudo[45709]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:55 compute-2 sudo[45861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knlhcxbvoymuzfbdeskbklfxbsyjtuza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.0606902-158-211409469481745/AnsiballZ_stat.py'
Oct 09 09:41:55 compute-2 sudo[45861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:55 compute-2 python3.9[45863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:55 compute-2 sudo[45861]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:55.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:55 compute-2 sudo[45985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pksazntljnzsljxrzzqboxzbaztabxxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.0606902-158-211409469481745/AnsiballZ_copy.py'
Oct 09 09:41:55 compute-2 sudo[45985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:55 compute-2 python3.9[45987]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.0606902-158-211409469481745/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=7fbde074fa214bc5bd2f230fec0e2b862212f741 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:55 compute-2 sudo[45985]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:56 compute-2 ceph-mon[5983]: pgmap v248: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:56 compute-2 sudo[46138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evjiwuuuzywrqjrctskkrnlymritsrcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.9632611-158-219496370782421/AnsiballZ_stat.py'
Oct 09 09:41:56 compute-2 sudo[46138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:56.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:56 compute-2 python3.9[46140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:56 compute-2 sudo[46138]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:56 compute-2 sudo[46261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zputpuvqdhzcyhvwgxbotvcgdtalcgtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002915.9632611-158-219496370782421/AnsiballZ_copy.py'
Oct 09 09:41:56 compute-2 sudo[46261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:56 compute-2 python3.9[46263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002915.9632611-158-219496370782421/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=38a926159765120eafd851814c946f414ec424b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:56 compute-2 sudo[46261]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:57 compute-2 sudo[46413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxrrqvqkaukiplndeliulcruscqjcyoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002916.9059644-300-96398027522013/AnsiballZ_file.py'
Oct 09 09:41:57 compute-2 sudo[46413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:57 compute-2 python3.9[46415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:57 compute-2 sudo[46413]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:57.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:57 compute-2 sudo[46566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ildbjqxdcfpxngiqryywzqsgnxgpendi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.367052-300-215008870002714/AnsiballZ_file.py'
Oct 09 09:41:57 compute-2 sudo[46566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:57 compute-2 python3.9[46568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:41:57 compute-2 sudo[46566]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:41:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:41:58 compute-2 sudo[46718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceugdfiwqjxsgqmzilmklxhqtnbguqoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.8569152-345-64477424480336/AnsiballZ_stat.py'
Oct 09 09:41:58 compute-2 sudo[46718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:58 compute-2 ceph-mon[5983]: pgmap v249: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 1 op/s
Oct 09 09:41:58 compute-2 python3.9[46720]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:58 compute-2 sudo[46718]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:41:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:41:58.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:41:58 compute-2 sudo[46842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nquvfbdljousqfjxnuizaejgeuponurd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002917.8569152-345-64477424480336/AnsiballZ_copy.py'
Oct 09 09:41:58 compute-2 sudo[46842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:58 compute-2 python3.9[46844]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002917.8569152-345-64477424480336/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=2fab77a7a903d443f6dce5fe29730068e168b602 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:58 compute-2 sudo[46842]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:58 compute-2 sudo[46994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-baivprcktybmsyrhyhsjewsqndyolaxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002918.7416267-345-57949908334541/AnsiballZ_stat.py'
Oct 09 09:41:58 compute-2 sudo[46994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:59 compute-2 python3.9[46996]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:59 compute-2 sudo[46994]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:59 compute-2 sudo[47117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kefjdbtnwtnhxcqfjlrywbcqeqlccktg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002918.7416267-345-57949908334541/AnsiballZ_copy.py'
Oct 09 09:41:59 compute-2 sudo[47117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:41:59 compute-2 python3.9[47119]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002918.7416267-345-57949908334541/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:41:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:41:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:41:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:41:59.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:41:59 compute-2 sudo[47117]: pam_unix(sudo:session): session closed for user root
Oct 09 09:41:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:41:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:41:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:41:59 compute-2 sudo[47270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akcotkxjrlgqhgelftgsraywkvcreufd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002919.5785768-345-19621687433874/AnsiballZ_stat.py'
Oct 09 09:41:59 compute-2 sudo[47270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:41:59 compute-2 python3.9[47272]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:41:59 compute-2 sudo[47270]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:00 compute-2 ceph-mon[5983]: pgmap v250: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:00 compute-2 sudo[47394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rizpsafjvakjwoioxuvyvhnzuhfmjndp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002919.5785768-345-19621687433874/AnsiballZ_copy.py'
Oct 09 09:42:00 compute-2 sudo[47394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:00.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:00 compute-2 python3.9[47396]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002919.5785768-345-19621687433874/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=6410d67b3938ea761bab7ec9350e6d4e3cd79110 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:00 compute-2 sudo[47394]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:00 compute-2 sudo[47546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tevxgloukkuugawgmgzecagwabgbfhgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002920.4970589-478-271501543451494/AnsiballZ_file.py'
Oct 09 09:42:00 compute-2 sudo[47546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:00 compute-2 python3.9[47548]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:00 compute-2 sudo[47546]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:01 compute-2 sudo[47698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkgghjecvmefrqyevpcyyhusmfprjnsf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002920.9648056-478-68675622580656/AnsiballZ_file.py'
Oct 09 09:42:01 compute-2 sudo[47698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:01 compute-2 python3.9[47700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:01 compute-2 sudo[47698]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:01.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:01 compute-2 sudo[47851]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fphlrctuknvgzpbvfcftqcnzernbqvkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.4579563-524-222612344865110/AnsiballZ_stat.py'
Oct 09 09:42:01 compute-2 sudo[47851]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:01 compute-2 python3.9[47853]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:01 compute-2 sudo[47851]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-2 sudo[47974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yanqesuxnxljujnkexwknbqbbirqvhrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002921.4579563-524-222612344865110/AnsiballZ_copy.py'
Oct 09 09:42:02 compute-2 sudo[47974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:02 compute-2 sudo[47977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:42:02 compute-2 sudo[47977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:02 compute-2 sudo[47977]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-2 ceph-mon[5983]: pgmap v251: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:02 compute-2 sudo[48002]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:42:02 compute-2 sudo[48002]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:02 compute-2 python3.9[47976]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002921.4579563-524-222612344865110/.source.crt _original_basename=compute-2.ctlplane.example.com-tls.crt follow=False checksum=980dd90580b4bebfd9eff0e377343cee4f9b8b85 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:02 compute-2 sudo[47974]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:02.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:02 compute-2 sudo[48002]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-2 sudo[48206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-outczhrpfhjioeakfsbhgvuyxbwnpivi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002922.3096123-524-78572849494310/AnsiballZ_stat.py'
Oct 09 09:42:02 compute-2 sudo[48206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:02 compute-2 python3.9[48208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:02 compute-2 sudo[48206]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:02 compute-2 sudo[48329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgpcqnfwnowdaduigqjcwkwlirydgooi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002922.3096123-524-78572849494310/AnsiballZ_copy.py'
Oct 09 09:42:02 compute-2 sudo[48329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:03 compute-2 python3.9[48331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002922.3096123-524-78572849494310/.source.crt _original_basename=compute-2.ctlplane.example.com-ca.crt follow=False checksum=40a9a855a5eba48419e934a92216fa818ce139fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:03 compute-2 sudo[48329]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:42:03 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:42:03 compute-2 sudo[48481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rheaiaozyfgrweimtjadqrcbzgqwqqxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002923.1461365-524-118474943956971/AnsiballZ_stat.py'
Oct 09 09:42:03 compute-2 sudo[48481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:03.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:03 compute-2 python3.9[48483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:03 compute-2 sudo[48481]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:03 compute-2 sudo[48605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwuichenhqgwtjrbkwgjqtriskciyjsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002923.1461365-524-118474943956971/AnsiballZ_copy.py'
Oct 09 09:42:03 compute-2 sudo[48605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:03 compute-2 python3.9[48607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002923.1461365-524-118474943956971/.source.key _original_basename=compute-2.ctlplane.example.com-tls.key follow=False checksum=52d07288a3525b8d0f28767e1a1ccba8d4ceb4ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:03 compute-2 sudo[48605]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:04 compute-2 ceph-mon[5983]: pgmap v252: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 341 B/s wr, 1 op/s
Oct 09 09:42:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:04.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:04 compute-2 sudo[48758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxoctqwycpaoyikzqpwppqumgoaqxzwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.4474585-703-145339816443282/AnsiballZ_file.py'
Oct 09 09:42:04 compute-2 sudo[48758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:04 compute-2 python3.9[48760]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:04 compute-2 sudo[48758]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:05 compute-2 sudo[48910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuixfdlutjxijqxsqezcbeistpzfsqwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.9218068-727-162718140527839/AnsiballZ_stat.py'
Oct 09 09:42:05 compute-2 sudo[48910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:05 compute-2 python3.9[48912]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:05 compute-2 sudo[48910]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:05.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:05 compute-2 sudo[49034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjuxmjynmfxomodzyfmwjpniodworxvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002924.9218068-727-162718140527839/AnsiballZ_copy.py'
Oct 09 09:42:05 compute-2 sudo[49034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:05 compute-2 python3.9[49036]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002924.9218068-727-162718140527839/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:05 compute-2 sudo[49034]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:05 compute-2 sudo[49061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:42:05 compute-2 sudo[49061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:05 compute-2 sudo[49061]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-2 sudo[49211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzflxtjvenwklssdbxwrxakeogfcdprp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002925.8478777-775-102497928473824/AnsiballZ_file.py'
Oct 09 09:42:06 compute-2 sudo[49211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:06 compute-2 ceph-mon[5983]: pgmap v253: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:42:06 compute-2 python3.9[49213]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:06 compute-2 sudo[49211]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:42:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:06.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:42:06 compute-2 sudo[49364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zakkahnlbevinjkyvrqwhojwzrbjuloi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002926.3450406-801-206082708221815/AnsiballZ_stat.py'
Oct 09 09:42:06 compute-2 sudo[49364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:06 compute-2 python3.9[49366]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:06 compute-2 sudo[49364]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:06 compute-2 sudo[49487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iomvizuwqnkgnntrhxziyzpqexobmpef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002926.3450406-801-206082708221815/AnsiballZ_copy.py'
Oct 09 09:42:06 compute-2 sudo[49487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:07 compute-2 python3.9[49489]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002926.3450406-801-206082708221815/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:07 compute-2 sudo[49487]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:07 compute-2 sudo[49639]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yokefkpevyplnlctcevzpjtzwjxuibev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002927.2965407-850-69423327164168/AnsiballZ_file.py'
Oct 09 09:42:07 compute-2 sudo[49639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:07 compute-2 python3.9[49642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:07 compute-2 sudo[49639]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:07 compute-2 sudo[49792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hwgplutbuncdlhysjhyzdrvbevvjoxiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002927.7935505-877-107663192395817/AnsiballZ_stat.py'
Oct 09 09:42:07 compute-2 sudo[49792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:08 compute-2 ceph-mon[5983]: pgmap v254: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:08 compute-2 python3.9[49794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:08 compute-2 sudo[49792]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:08.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:08 compute-2 sudo[49916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcatuadnqtpvuzkwwyszhrnriboksmrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002927.7935505-877-107663192395817/AnsiballZ_copy.py'
Oct 09 09:42:08 compute-2 sudo[49916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:08 compute-2 python3.9[49918]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002927.7935505-877-107663192395817/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:08 compute-2 sudo[49916]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:08 compute-2 sudo[50016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:08 compute-2 sudo[50016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:08 compute-2 sudo[50016]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:08 compute-2 sudo[50093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsciplohupbhdynqhicdpwenwvfkblwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002928.769593-926-197507014274890/AnsiballZ_file.py'
Oct 09 09:42:08 compute-2 sudo[50093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:09 compute-2 python3.9[50095]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:09 compute-2 sudo[50093]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:09.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:09 compute-2 sudo[50245]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emztxarrgdxdurjjettjcdszrhsskgoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002929.286838-952-21625768280456/AnsiballZ_stat.py'
Oct 09 09:42:09 compute-2 sudo[50245]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:09 compute-2 python3.9[50248]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:09 compute-2 sudo[50245]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:09 compute-2 sudo[50369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngraiwiedtfdbfxigrntromllsaudiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002929.286838-952-21625768280456/AnsiballZ_copy.py'
Oct 09 09:42:09 compute-2 sudo[50369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:10 compute-2 python3.9[50371]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002929.286838-952-21625768280456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:10 compute-2 ceph-mon[5983]: pgmap v255: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:10 compute-2 sudo[50369]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:10.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:10 compute-2 sudo[50522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsgmyxkkkgwrccflzbqcxzuefrsiciy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002930.2701912-1002-235254460374597/AnsiballZ_file.py'
Oct 09 09:42:10 compute-2 sudo[50522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:10 compute-2 python3.9[50524]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:10 compute-2 sudo[50522]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:10 compute-2 sudo[50674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nymifgezafixfneiqgzhebbmsrydgwib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002930.7719016-1028-242452959543648/AnsiballZ_stat.py'
Oct 09 09:42:10 compute-2 sudo[50674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:11 compute-2 python3.9[50676]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:11 compute-2 sudo[50674]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:11 compute-2 sudo[50797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djprnbmpjegimkwnoluwfohhcblprfdy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002930.7719016-1028-242452959543648/AnsiballZ_copy.py'
Oct 09 09:42:11 compute-2 sudo[50797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:11.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:11 compute-2 python3.9[50799]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002930.7719016-1028-242452959543648/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:11 compute-2 sudo[50797]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:11 compute-2 sudo[50950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzprocaqomwwzvtrcxcvmdiopuganhro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002931.755068-1078-273397886038089/AnsiballZ_file.py'
Oct 09 09:42:11 compute-2 sudo[50950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:12 compute-2 ceph-mon[5983]: pgmap v256: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:12 compute-2 python3.9[50952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:12 compute-2 sudo[50950]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:12.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:12 compute-2 sudo[51103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-llrsmyulkkehfhvzwtxpjjxxkkozjowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002932.2727973-1102-231805370009622/AnsiballZ_stat.py'
Oct 09 09:42:12 compute-2 sudo[51103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:12 compute-2 python3.9[51105]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:12 compute-2 sudo[51103]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:12 compute-2 sudo[51226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldkcmddlvamewgatvhoxfitoxruaroxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002932.2727973-1102-231805370009622/AnsiballZ_copy.py'
Oct 09 09:42:12 compute-2 sudo[51226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:13 compute-2 python3.9[51228]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002932.2727973-1102-231805370009622/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=18663dce7579212939db4e772c3b048f7d3aa6f0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:13 compute-2 sudo[51226]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:13 compute-2 ceph-mon[5983]: pgmap v257: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:42:13 compute-2 sshd-session[44974]: Connection closed by 192.168.122.30 port 36218
Oct 09 09:42:13 compute-2 sshd-session[44971]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:42:13 compute-2 systemd[1]: session-31.scope: Deactivated successfully.
Oct 09 09:42:13 compute-2 systemd[1]: session-31.scope: Consumed 17.084s CPU time.
Oct 09 09:42:13 compute-2 systemd-logind[800]: Session 31 logged out. Waiting for processes to exit.
Oct 09 09:42:13 compute-2 systemd-logind[800]: Removed session 31.
Oct 09 09:42:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:13.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:14.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:15 compute-2 ceph-mon[5983]: pgmap v258: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:15.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:17 compute-2 ceph-mon[5983]: pgmap v259: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:17.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:18.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:18 compute-2 sshd-session[51259]: Accepted publickey for zuul from 192.168.122.30 port 33354 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:42:18 compute-2 systemd-logind[800]: New session 32 of user zuul.
Oct 09 09:42:18 compute-2 systemd[1]: Started Session 32 of User zuul.
Oct 09 09:42:18 compute-2 sshd-session[51259]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:18 compute-2 sudo[51412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psetqhkxevjfhzcibrrnqcgsmganpfgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002938.6063244-28-23033017150673/AnsiballZ_file.py'
Oct 09 09:42:18 compute-2 sudo[51412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:19 compute-2 python3.9[51414]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:19 compute-2 sudo[51412]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:19 compute-2 ceph-mon[5983]: pgmap v260: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:19.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:19 compute-2 sudo[51565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kizqeyyxscmmpcckfotkxpyrfrzfawot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002939.2799282-64-136410422414060/AnsiballZ_stat.py'
Oct 09 09:42:19 compute-2 sudo[51565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:19 compute-2 python3.9[51567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:19 compute-2 sudo[51565]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:20 compute-2 sudo[51688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncuvihvplmqixknjttxwbasnnaltgenu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002939.2799282-64-136410422414060/AnsiballZ_copy.py'
Oct 09 09:42:20 compute-2 sudo[51688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:20 compute-2 python3.9[51690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002939.2799282-64-136410422414060/.source.conf _original_basename=ceph.conf follow=False checksum=8b7272e0630e6cb598e773121c6b56dda1c87bf8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:20.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:20 compute-2 sudo[51688]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:20 compute-2 sudo[51841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzvkfnoxxqjtdxurwtawuakpkbauchaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002940.4067373-64-194490064068930/AnsiballZ_stat.py'
Oct 09 09:42:20 compute-2 sudo[51841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:20 compute-2 python3.9[51843]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:20 compute-2 sudo[51841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:21 compute-2 sudo[51964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iypeanvrhpvvbxnrcgblohjiljnujmsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002940.4067373-64-194490064068930/AnsiballZ_copy.py'
Oct 09 09:42:21 compute-2 sudo[51964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:21 compute-2 python3.9[51966]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002940.4067373-64-194490064068930/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=f2b8c5d3158b549e18e5631f97d7800b8ceae49e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:21 compute-2 sudo[51964]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:21 compute-2 ceph-mon[5983]: pgmap v261: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:21 compute-2 sshd-session[51262]: Connection closed by 192.168.122.30 port 33354
Oct 09 09:42:21 compute-2 sshd-session[51259]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:42:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:21.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:21 compute-2 systemd[1]: session-32.scope: Deactivated successfully.
Oct 09 09:42:21 compute-2 systemd[1]: session-32.scope: Consumed 2.089s CPU time.
Oct 09 09:42:21 compute-2 systemd-logind[800]: Session 32 logged out. Waiting for processes to exit.
Oct 09 09:42:21 compute-2 systemd-logind[800]: Removed session 32.
Oct 09 09:42:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:22.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:23 compute-2 ceph-mon[5983]: pgmap v262: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:42:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:23.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:24.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:25 compute-2 ceph-mon[5983]: pgmap v263: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:25.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:26.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:26 compute-2 sshd-session[51997]: Accepted publickey for zuul from 192.168.122.30 port 41078 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:42:26 compute-2 systemd[1]: Starting dnf makecache...
Oct 09 09:42:26 compute-2 systemd-logind[800]: New session 33 of user zuul.
Oct 09 09:42:26 compute-2 systemd[1]: Started Session 33 of User zuul.
Oct 09 09:42:26 compute-2 sshd-session[51997]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:42:26 compute-2 dnf[51999]: Metadata cache refreshed recently.
Oct 09 09:42:26 compute-2 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct 09 09:42:26 compute-2 systemd[1]: Finished dnf makecache.
Oct 09 09:42:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:27 compute-2 python3.9[52151]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:27 compute-2 ceph-mon[5983]: pgmap v264: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:27.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [WARNING] 281/094227 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-nfs-cephfs-compute-2-iyubhq[17435]: [ALERT] 281/094227 (4) : backend 'backend' has no server available!
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:28 compute-2 sudo[52306]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oweafjgkioianyhypjqkbxyhqmgkebhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002947.6955924-64-12741316484088/AnsiballZ_file.py'
Oct 09 09:42:28 compute-2 sudo[52306]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:28 compute-2 python3.9[52308]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:28 compute-2 sudo[52306]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:28.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:28 compute-2 sudo[52459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucuiexkzucesdyxmeefubtghozrinvpi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002948.3196926-64-263137868540525/AnsiballZ_file.py'
Oct 09 09:42:28 compute-2 sudo[52459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:28 compute-2 python3.9[52461]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:28 compute-2 sudo[52459]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:28 compute-2 sudo[52509]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:28 compute-2 sudo[52509]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:28 compute-2 sudo[52509]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:29 compute-2 python3.9[52636]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:29 compute-2 ceph-mon[5983]: pgmap v265: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:29.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:29 compute-2 sudo[52787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amlemjvsyrhccszqrebgmrswryjhsirm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002949.49518-133-190954888850066/AnsiballZ_seboolean.py'
Oct 09 09:42:29 compute-2 sudo[52787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:29 compute-2 python3.9[52789]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 09:42:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:30.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:30 compute-2 sudo[52787]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:31 compute-2 sudo[52947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syfdadtxtfvdegkrgngbdlcpssdzazvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002951.177643-163-56286128878701/AnsiballZ_setup.py'
Oct 09 09:42:31 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=2 res=1
Oct 09 09:42:31 compute-2 sudo[52947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:31 compute-2 ceph-mon[5983]: pgmap v266: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 0 op/s
Oct 09 09:42:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:31.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:31 compute-2 python3.9[52949]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:42:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:31 compute-2 sudo[52947]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:32 compute-2 sudo[53032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weqwvuiziogvshysfxldxvhuvcoizlrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002951.177643-163-56286128878701/AnsiballZ_dnf.py'
Oct 09 09:42:32 compute-2 sudo[53032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:32 compute-2 python3.9[53034]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:42:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:32.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:33 compute-2 sudo[53032]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:33.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:33 compute-2 ceph-mon[5983]: pgmap v267: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 0 op/s
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:33 compute-2 sudo[53187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmelkdsckqincxvcycowniidvmyojfrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002953.3691156-199-57887066903210/AnsiballZ_systemd.py'
Oct 09 09:42:33 compute-2 sudo[53187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:34 compute-2 python3.9[53189]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:42:34 compute-2 sudo[53187]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:34.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:34 compute-2 sudo[53343]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhrfazmjjxayzfuyuohklnlypkaseyha ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002954.2503846-223-72615735609240/AnsiballZ_edpm_nftables_snippet.py'
Oct 09 09:42:34 compute-2 sudo[53343]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:34 compute-2 python3[53345]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct 09 09:42:34 compute-2 sudo[53343]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:35 compute-2 sudo[53495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jbynfumzquwwgqdmjajsjdpgtowmowpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002954.9337103-250-106359784021146/AnsiballZ_file.py'
Oct 09 09:42:35 compute-2 sudo[53495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:35 compute-2 python3.9[53497]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:35 compute-2 sudo[53495]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:35.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:35 compute-2 ceph-mon[5983]: pgmap v268: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Oct 09 09:42:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:35 compute-2 sudo[53648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttqmnlifgjshgtqhyddxzdfjgoyyrmkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002955.420983-274-55159581781760/AnsiballZ_stat.py'
Oct 09 09:42:35 compute-2 sudo[53648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:35 compute-2 python3.9[53650]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:35 compute-2 sudo[53648]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-2 sudo[53726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmyrrlhlacrogytdcuqpotasjsxuwskr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002955.420983-274-55159581781760/AnsiballZ_file.py'
Oct 09 09:42:36 compute-2 sudo[53726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:36 compute-2 python3.9[53728]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:36 compute-2 sudo[53726]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:36.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:36 compute-2 sudo[53879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkfxhqexftkcleenfawnxwnerhbmrobj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002956.3726346-310-16173417806526/AnsiballZ_stat.py'
Oct 09 09:42:36 compute-2 sudo[53879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:36 compute-2 python3.9[53881]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:36 compute-2 sudo[53879]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:36 compute-2 sudo[53957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlnkftnnhiscjytxygxrlwqvnqpuzecu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002956.3726346-310-16173417806526/AnsiballZ_file.py'
Oct 09 09:42:36 compute-2 sudo[53957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-2 python3.9[53959]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.hbg46kms recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:37 compute-2 sudo[53957]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:37 compute-2 sudo[54109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idsialoqevpzqadmxyuwwfskuxhbnayo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002957.2188952-347-173359723930022/AnsiballZ_stat.py'
Oct 09 09:42:37 compute-2 sudo[54109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:37.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:37 compute-2 ceph-mon[5983]: pgmap v269: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:37 compute-2 python3.9[54111]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:37 compute-2 sudo[54109]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:37 compute-2 sudo[54188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzzmoeaknplqhtkgzokfdeguyezenxuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002957.2188952-347-173359723930022/AnsiballZ_file.py'
Oct 09 09:42:37 compute-2 sudo[54188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:37 compute-2 python3.9[54190]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:37 compute-2 sudo[54188]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:38.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:38 compute-2 sudo[54341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmjszsgyhqwxatdgbfcevshqjtalkrgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002958.1414008-385-70321006899539/AnsiballZ_command.py'
Oct 09 09:42:38 compute-2 sudo[54341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:38 compute-2 python3.9[54343]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:38 compute-2 sudo[54341]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:39 compute-2 sudo[54494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vibfjfsnbqeacqgjyslaefflqfpaqugx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002958.7813506-409-281005776748472/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:42:39 compute-2 sudo[54494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:39 compute-2 python3[54496]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:42:39 compute-2 sudo[54494]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:39.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:39 compute-2 ceph-mon[5983]: pgmap v270: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:39 compute-2 sudo[54647]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-enrlixscuxboraeeatzuidjbujokoxqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002959.3904185-433-33693085918159/AnsiballZ_stat.py'
Oct 09 09:42:39 compute-2 sudo[54647]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:39 compute-2 python3.9[54649]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:39 compute-2 sudo[54647]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:40 compute-2 sudo[54773]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcdpilwrtfmcjsfcajkialkxmacputyc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002959.3904185-433-33693085918159/AnsiballZ_copy.py'
Oct 09 09:42:40 compute-2 sudo[54773]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:40 compute-2 python3.9[54775]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002959.3904185-433-33693085918159/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:40 compute-2 sudo[54773]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:40 compute-2 sudo[54925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-borrcjsljshxitjhuqibmnyaqhagrybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002960.508045-478-137696161991167/AnsiballZ_stat.py'
Oct 09 09:42:40 compute-2 sudo[54925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:40 compute-2 python3.9[54927]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:40 compute-2 sudo[54925]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:41 compute-2 sudo[55050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvmkeboftuvmrizvipecdeemfqzqzsyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002960.508045-478-137696161991167/AnsiballZ_copy.py'
Oct 09 09:42:41 compute-2 sudo[55050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:41 compute-2 python3.9[55052]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002960.508045-478-137696161991167/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:41 compute-2 sudo[55050]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:41.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:41 compute-2 ceph-mon[5983]: pgmap v271: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:41 compute-2 sudo[55203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzlwvqkxkqneuerealrzrjsyypbslngi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002961.6267846-523-140967059245984/AnsiballZ_stat.py'
Oct 09 09:42:41 compute-2 sudo[55203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:41 compute-2 python3.9[55205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:42 compute-2 sudo[55203]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:42 compute-2 sudo[55329]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvnxfbagcdqoqjovvlwjszqpdnofqqwj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002961.6267846-523-140967059245984/AnsiballZ_copy.py'
Oct 09 09:42:42 compute-2 sudo[55329]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:42.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:42 compute-2 python3.9[55331]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002961.6267846-523-140967059245984/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:42 compute-2 sudo[55329]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:42 compute-2 sudo[55481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqfoisioampowzvtcsgunxmrkcxkfzj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002962.576975-568-14409729099319/AnsiballZ_stat.py'
Oct 09 09:42:42 compute-2 sudo[55481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:42 compute-2 python3.9[55483]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:42 compute-2 sudo[55481]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:43 compute-2 sudo[55606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyeiqaaljwsaycldhvedeucmrrddsqmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002962.576975-568-14409729099319/AnsiballZ_copy.py'
Oct 09 09:42:43 compute-2 sudo[55606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:43 compute-2 python3.9[55608]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002962.576975-568-14409729099319/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:43 compute-2 sudo[55606]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:43.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:43 compute-2 ceph-mon[5983]: pgmap v272: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 170 B/s wr, 2 op/s
Oct 09 09:42:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:43 compute-2 sudo[55759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prkttvliyxuozwxwwnglfbgcimwsycwn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002963.5476701-613-9988118755186/AnsiballZ_stat.py'
Oct 09 09:42:43 compute-2 sudo[55759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:43 compute-2 python3.9[55761]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:43 compute-2 sudo[55759]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:44 compute-2 sudo[55885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htlrfcquqfphrfcodouwroyctphhtnjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002963.5476701-613-9988118755186/AnsiballZ_copy.py'
Oct 09 09:42:44 compute-2 sudo[55885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:44 compute-2 python3.9[55887]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760002963.5476701-613-9988118755186/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:44 compute-2 sudo[55885]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:44 compute-2 sudo[56037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izjkahighqzvucjzkqwkyxlhfqohbdit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002964.7647874-658-64170296061170/AnsiballZ_file.py'
Oct 09 09:42:44 compute-2 sudo[56037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:45 compute-2 python3.9[56039]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:45 compute-2 sudo[56037]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:45 compute-2 sudo[56189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nclmzinkvnuevsqrlumkwnunjrvobiop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002965.2722108-682-19438719708831/AnsiballZ_command.py'
Oct 09 09:42:45 compute-2 sudo[56189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:45.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:45 compute-2 ceph-mon[5983]: pgmap v273: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 170 B/s wr, 1 op/s
Oct 09 09:42:45 compute-2 python3.9[56191]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:45 compute-2 sudo[56189]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:46 compute-2 sudo[56346]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nawmigkbymhjsgquoaopgwghggflfpsu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002965.839288-707-271305722188329/AnsiballZ_blockinfile.py'
Oct 09 09:42:46 compute-2 sudo[56346]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:46.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:46 compute-2 python3.9[56348]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:46 compute-2 sudo[56346]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:46 compute-2 sudo[56498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpyshpzfocwjwmwvnkfmvizjzcekycvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002966.5942526-734-226726089316991/AnsiballZ_command.py'
Oct 09 09:42:46 compute-2 sudo[56498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:46 compute-2 python3.9[56500]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:46 compute-2 sudo[56498]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:47 compute-2 sudo[56651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukzaigiwomvbgsmclgnjuidodmnvfnyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002967.1436028-757-73829241642324/AnsiballZ_stat.py'
Oct 09 09:42:47 compute-2 sudo[56651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:47.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:47 compute-2 ceph-mon[5983]: pgmap v274: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 170 B/s wr, 2 op/s
Oct 09 09:42:47 compute-2 python3.9[56653]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:42:47 compute-2 sudo[56651]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:47 compute-2 sudo[56806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grrhuuiekxbljcqewpoxogkhbsjydhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002967.7383997-781-184925447935485/AnsiballZ_command.py'
Oct 09 09:42:47 compute-2 sudo[56806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:48 compute-2 python3.9[56808]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:48 compute-2 sudo[56806]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:48.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:48 compute-2 sudo[56962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goyjlqxkbzztvxmhrifngzpmjrgidzya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002968.2951653-805-187955163116166/AnsiballZ_file.py'
Oct 09 09:42:48 compute-2 sudo[56962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:48 compute-2 python3.9[56964]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:48 compute-2 sudo[56962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:48 compute-2 sudo[56989]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:42:48 compute-2 sudo[56989]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:42:49 compute-2 sudo[56989]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:49.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:49 compute-2 ceph-mon[5983]: pgmap v275: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:42:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:49 compute-2 python3.9[57140]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:42:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:50 compute-2 sudo[57292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jngvgirjsncanreapfifqadblhrpsziu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002970.4083612-925-2965977524406/AnsiballZ_command.py'
Oct 09 09:42:50 compute-2 sudo[57292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:50 compute-2 python3.9[57294]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:2e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:50 compute-2 ovs-vsctl[57295]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-2.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:2e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.102 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct 09 09:42:50 compute-2 sudo[57292]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:51 compute-2 sudo[57445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kloshhcyolvtfyfkyryguqwfwrgudwaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002971.015812-953-152121145471540/AnsiballZ_command.py'
Oct 09 09:42:51 compute-2 sudo[57445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:51 compute-2 python3.9[57447]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:51 compute-2 sudo[57445]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:51.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:51 compute-2 ceph-mon[5983]: pgmap v276: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:51 compute-2 sudo[57601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gulqtcxipmcvknunfligdzdyoziampte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002971.5574236-976-100937282048760/AnsiballZ_command.py'
Oct 09 09:42:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:51 compute-2 sudo[57601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:51 compute-2 python3.9[57603]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:42:51 compute-2 ovs-vsctl[57604]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct 09 09:42:51 compute-2 sudo[57601]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:52.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:52 compute-2 python3.9[57755]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:42:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:52 compute-2 sudo[57907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehaaletsdihebtzxoklqvlpuxgotopde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002972.6519928-1028-79504675771095/AnsiballZ_file.py'
Oct 09 09:42:52 compute-2 sudo[57907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:52 compute-2 python3.9[57909]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:53 compute-2 sudo[57907]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:53 compute-2 sudo[58059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luwawzuivobyagtqxbiqqdnmqpppxkwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002973.1505432-1051-271300330372981/AnsiballZ_stat.py'
Oct 09 09:42:53 compute-2 sudo[58059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:53 compute-2 python3.9[58061]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:53.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:53 compute-2 sudo[58059]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:53 compute-2 ceph-mon[5983]: pgmap v277: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 85 B/s wr, 1 op/s
Oct 09 09:42:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:53 compute-2 sudo[58138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrdeiteqaqgbmwcugzsgefwghnonglcn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002973.1505432-1051-271300330372981/AnsiballZ_file.py'
Oct 09 09:42:53 compute-2 sudo[58138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:53 compute-2 python3.9[58140]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:54 compute-2 sudo[58138]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:54 compute-2 sudo[58291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodgjthultgnzgeatwgelnoampgbbzxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.1036937-1051-141099224873835/AnsiballZ_stat.py'
Oct 09 09:42:54 compute-2 sudo[58291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:42:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:54.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:42:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:54 compute-2 python3.9[58293]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:54 compute-2 sudo[58291]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:54 compute-2 sudo[58369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwqqmcxxoycngzkbfrfnokxkjcecxqci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.1036937-1051-141099224873835/AnsiballZ_file.py'
Oct 09 09:42:54 compute-2 sudo[58369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:54 compute-2 python3.9[58371]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:42:54 compute-2 sudo[58369]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:55 compute-2 sudo[58521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upmfghgwztzgwpufgjvmaksmcieawpvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002974.96032-1121-71270502722954/AnsiballZ_file.py'
Oct 09 09:42:55 compute-2 sudo[58521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:55 compute-2 python3.9[58523]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:55 compute-2 sudo[58521]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:55.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:55 compute-2 ceph-mon[5983]: pgmap v278: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:42:55 compute-2 sudo[58674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsdymzyfypebdujthhptsziuuufazsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002975.5012918-1144-177781831955264/AnsiballZ_stat.py'
Oct 09 09:42:55 compute-2 sudo[58674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:55 compute-2 python3.9[58676]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:55 compute-2 sudo[58674]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:42:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:42:56 compute-2 sudo[58752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzmexhgrqldojwdsnnqtnmyobketelea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002975.5012918-1144-177781831955264/AnsiballZ_file.py'
Oct 09 09:42:56 compute-2 sudo[58752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:56 compute-2 python3.9[58754]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:56 compute-2 sudo[58752]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:56.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:56 compute-2 sudo[58905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpizfcoptmmpyajtdsgwawfczxcgkvsw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002976.383825-1180-180295897107543/AnsiballZ_stat.py'
Oct 09 09:42:56 compute-2 sudo[58905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:56 compute-2 python3.9[58907]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:56 compute-2 sudo[58905]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:56 compute-2 sudo[58983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-koowmyicjlgypurtgydjmnlrpgtofpmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002976.383825-1180-180295897107543/AnsiballZ_file.py'
Oct 09 09:42:56 compute-2 sudo[58983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:57 compute-2 python3.9[58985]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:57 compute-2 sudo[58983]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:57 compute-2 sudo[59135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzadbxvtmuludsqohfskaqujitejhtwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002977.231829-1216-87539007853842/AnsiballZ_systemd.py'
Oct 09 09:42:57 compute-2 sudo[59135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:57.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:57 compute-2 ceph-mon[5983]: pgmap v279: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:42:57 compute-2 python3.9[59137]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:42:57 compute-2 systemd[1]: Reloading.
Oct 09 09:42:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:57 compute-2 systemd-rc-local-generator[59158]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:42:57 compute-2 systemd-sysv-generator[59162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:42:57 compute-2 sudo[59135]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:58 compute-2 sudo[59325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hltuemfuoqwurxvuppyheqqukfrezcie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.1070411-1240-21284008166627/AnsiballZ_stat.py'
Oct 09 09:42:58 compute-2 sudo[59325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:42:58.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:58 compute-2 python3.9[59327]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:58 compute-2 sudo[59325]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:58 compute-2 sudo[59403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plhclibafamdkrzmrlvfnzasvzhvvybc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.1070411-1240-21284008166627/AnsiballZ_file.py'
Oct 09 09:42:58 compute-2 sudo[59403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:58 compute-2 python3.9[59405]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:58 compute-2 sudo[59403]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:59 compute-2 sudo[59555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfqwgydgrxzflsglxvociphmwfspcqrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.9304466-1276-102550428025128/AnsiballZ_stat.py'
Oct 09 09:42:59 compute-2 sudo[59555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:59 compute-2 python3.9[59557]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:42:59 compute-2 sudo[59555]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:59 compute-2 sudo[59633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgxgniswtytrwajggobkihlpzejczexs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002978.9304466-1276-102550428025128/AnsiballZ_file.py'
Oct 09 09:42:59 compute-2 sudo[59633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:42:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:42:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:42:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:42:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:42:59.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:42:59 compute-2 ceph-mon[5983]: pgmap v280: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:42:59 compute-2 python3.9[59635]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:42:59 compute-2 sudo[59633]: pam_unix(sudo:session): session closed for user root
Oct 09 09:42:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:42:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:42:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:42:59 compute-2 sudo[59786]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftzhkbycxyexponjpysiudonqbpgyvfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002979.7440467-1312-215616266256440/AnsiballZ_systemd.py'
Oct 09 09:42:59 compute-2 sudo[59786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:00 compute-2 python3.9[59788]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:00 compute-2 systemd[1]: Reloading.
Oct 09 09:43:00 compute-2 systemd-rc-local-generator[59811]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:00 compute-2 systemd-sysv-generator[59814]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:00.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:00 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:43:00 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:43:00 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:43:00 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:43:00 compute-2 sudo[59786]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:00 compute-2 sudo[59980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igiibpznnfrwqxqydmgzxogeqcynjfiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002980.7397711-1342-202599932394454/AnsiballZ_file.py'
Oct 09 09:43:00 compute-2 sudo[59980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:01 compute-2 python3.9[59982]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:01 compute-2 sudo[59980]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:01 compute-2 sudo[60132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axtrksdeililkjxkqtduudskbsclfasl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002981.2392247-1366-221275307511219/AnsiballZ_stat.py'
Oct 09 09:43:01 compute-2 sudo[60132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:01.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:01 compute-2 python3.9[60134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:01 compute-2 ceph-mon[5983]: pgmap v281: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:01 compute-2 sudo[60132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:01 compute-2 sudo[60256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kajmmwxxcyyddhcczkeoghbcrkvpmzfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002981.2392247-1366-221275307511219/AnsiballZ_copy.py'
Oct 09 09:43:01 compute-2 sudo[60256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:01 compute-2 python3.9[60258]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760002981.2392247-1366-221275307511219/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:02 compute-2 sudo[60256]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:02.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:02 compute-2 sudo[60409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lovqyokknwjtmarmuohfukmapchxsiee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002982.5228598-1417-78893334755989/AnsiballZ_file.py'
Oct 09 09:43:02 compute-2 sudo[60409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:02 compute-2 python3.9[60411]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:02 compute-2 sudo[60409]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:03 compute-2 sudo[60561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osgplmurwxwmpkpvbivrhdrvqymzoxlg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.0441544-1441-29288235349591/AnsiballZ_stat.py'
Oct 09 09:43:03 compute-2 sudo[60561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:03 compute-2 python3.9[60563]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:03 compute-2 sudo[60561]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:03.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:03 compute-2 ceph-mon[5983]: pgmap v282: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:03 compute-2 sudo[60685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sadpygeheybukfwesbnnfoplozelplic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.0441544-1441-29288235349591/AnsiballZ_copy.py'
Oct 09 09:43:03 compute-2 sudo[60685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:03 compute-2 python3.9[60687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760002983.0441544-1441-29288235349591/.source.json _original_basename=.91twzejg follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:03 compute-2 sudo[60685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:04 compute-2 sudo[60837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwzpybhteusmaafxcxjvhlscctitwrzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002983.939631-1487-180380825495652/AnsiballZ_file.py'
Oct 09 09:43:04 compute-2 sudo[60837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:04 compute-2 python3.9[60839]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:04 compute-2 sudo[60837]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:04.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:04 compute-2 sudo[60990]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jaaofyncelosotpvoxwzlpsuqhqnktln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002984.4528468-1510-131396794060705/AnsiballZ_stat.py'
Oct 09 09:43:04 compute-2 sudo[60990]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:04 compute-2 sudo[60990]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-2 sudo[61113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqhyboduiyrmnjbjbzgbxpqqqumalavf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002984.4528468-1510-131396794060705/AnsiballZ_copy.py'
Oct 09 09:43:05 compute-2 sudo[61113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:05 compute-2 sudo[61113]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:05.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:05 compute-2 ceph-mon[5983]: pgmap v283: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:05 compute-2 sudo[61266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxyldkodcjphylotcpyrlnhcuhicbwmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002985.4626296-1561-209546652085810/AnsiballZ_container_config_data.py'
Oct 09 09:43:05 compute-2 sudo[61266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:05 compute-2 sudo[61269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:43:05 compute-2 sudo[61269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:05 compute-2 sudo[61269]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-2 python3.9[61268]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct 09 09:43:05 compute-2 sudo[61266]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:05 compute-2 sudo[61294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:43:05 compute-2 sudo[61294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:06.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:06 compute-2 sudo[61294]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:43:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:43:06 compute-2 sudo[61498]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxasexupkgcbhyttwbgsvinebgmshoiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002986.1573532-1588-246777092340275/AnsiballZ_container_config_hash.py'
Oct 09 09:43:06 compute-2 sudo[61498]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:06 compute-2 python3.9[61500]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:43:06 compute-2 sudo[61498]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:07 compute-2 sudo[61650]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nigjwgvceoaorjvgfgsadichqpyxfoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002987.0391984-1615-4207521538187/AnsiballZ_podman_container_info.py'
Oct 09 09:43:07 compute-2 sudo[61650]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:07 compute-2 python3.9[61652]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:43:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:07.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:07 compute-2 sudo[61650]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:07 compute-2 ceph-mon[5983]: pgmap v284: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:08.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:08 compute-2 sudo[61823]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmrrrevetedroglwbxrejhxpjtxcpxwj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760002988.3575-1654-251471797911962/AnsiballZ_edpm_container_manage.py'
Oct 09 09:43:08 compute-2 sudo[61823]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:08 compute-2 python3[61825]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:43:09 compute-2 sudo[61846]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:09 compute-2 sudo[61846]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:09 compute-2 sudo[61846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:09.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:09 compute-2 ceph-mon[5983]: pgmap v285: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:43:09 compute-2 sudo[61872]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:43:09 compute-2 sudo[61872]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:09 compute-2 sudo[61872]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:10.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #22. Immutable memtables: 0.
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.123781) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 22
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991123825, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2315, "num_deletes": 250, "total_data_size": 6190411, "memory_usage": 6285112, "flush_reason": "Manual Compaction"}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #23: started
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991130058, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 23, "file_size": 2417462, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10628, "largest_seqno": 12937, "table_properties": {"data_size": 2410776, "index_size": 3500, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2117, "raw_key_size": 17020, "raw_average_key_size": 20, "raw_value_size": 2395934, "raw_average_value_size": 2852, "num_data_blocks": 156, "num_entries": 840, "num_filter_entries": 840, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002784, "oldest_key_time": 1760002784, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 6325 microseconds, and 4379 cpu microseconds.
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130110) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #23: 2417462 bytes OK
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130129) [db/memtable_list.cc:519] [default] Level-0 commit table #23 started
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130499) [db/memtable_list.cc:722] [default] Level-0 commit table #23: memtable #1 done
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130513) EVENT_LOG_v1 {"time_micros": 1760002991130510, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.130527) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 6180268, prev total WAL file size 6180268, number of live WAL files 2.
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000019.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131968) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [23(2360KB)], [21(13MB)]
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991132158, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [23], "files_L6": [21], "score": -1, "input_data_size": 16869059, "oldest_snapshot_seqno": -1}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #24: 4397 keys, 14823754 bytes, temperature: kUnknown
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991172105, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 24, "file_size": 14823754, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14789596, "index_size": 22080, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11013, "raw_key_size": 110454, "raw_average_key_size": 25, "raw_value_size": 14704682, "raw_average_value_size": 3344, "num_data_blocks": 954, "num_entries": 4397, "num_filter_entries": 4397, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002991, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 24, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172449) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14823754 bytes
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172944) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 422.0 rd, 370.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 13.8 +0.0 blob) out(14.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.1) OK, records in: 4818, records dropped: 421 output_compression: NoCompression
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.172959) EVENT_LOG_v1 {"time_micros": 1760002991172952, "job": 10, "event": "compaction_finished", "compaction_time_micros": 39975, "compaction_time_cpu_micros": 20579, "output_level": 6, "num_output_files": 1, "total_output_size": 14823754, "num_input_records": 4818, "num_output_records": 4397, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991173449, "job": 10, "event": "table_file_deletion", "file_number": 23}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000021.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002991175029, "job": 10, "event": "table_file_deletion", "file_number": 21}
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.131827) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175114) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175115) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:11.175117) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:11.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:12 compute-2 ceph-mon[5983]: pgmap v286: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:12.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:13 compute-2 ceph-mon[5983]: pgmap v287: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:13.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:14 compute-2 podman[61836]: 2025-10-09 09:43:14.106464503 +0000 UTC m=+5.106289766 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-2 podman[61990]: 2025-10-09 09:43:14.197594018 +0000 UTC m=+0.028537519 container create 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Oct 09 09:43:14 compute-2 podman[61990]: 2025-10-09 09:43:14.184574214 +0000 UTC m=+0.015517725 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-2 python3[61825]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct 09 09:43:14 compute-2 sudo[61823]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:14.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #25. Immutable memtables: 0.
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.434529) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 25
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994434554, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 251, "total_data_size": 122966, "memory_usage": 129416, "flush_reason": "Manual Compaction"}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #26: started
Oct 09 09:43:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994436695, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 26, "file_size": 80942, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12938, "largest_seqno": 13227, "table_properties": {"data_size": 79032, "index_size": 138, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 4703, "raw_average_key_size": 17, "raw_value_size": 75309, "raw_average_value_size": 278, "num_data_blocks": 6, "num_entries": 270, "num_filter_entries": 270, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002991, "oldest_key_time": 1760002991, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 2428 microseconds, and 584 cpu microseconds.
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436718) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #26: 80942 bytes OK
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.436974) [db/memtable_list.cc:519] [default] Level-0 commit table #26 started
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437384) [db/memtable_list.cc:722] [default] Level-0 commit table #26: memtable #1 done
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437396) EVENT_LOG_v1 {"time_micros": 1760002994437393, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437403) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 120817, prev total WAL file size 120817, number of live WAL files 2.
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000022.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437892) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [26(79KB)], [24(14MB)]
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994437930, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [26], "files_L6": [24], "score": -1, "input_data_size": 14904696, "oldest_snapshot_seqno": -1}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #27: 4157 keys, 11557943 bytes, temperature: kUnknown
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994468811, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 27, "file_size": 11557943, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11527062, "index_size": 19379, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 106405, "raw_average_key_size": 25, "raw_value_size": 11448005, "raw_average_value_size": 2753, "num_data_blocks": 828, "num_entries": 4157, "num_filter_entries": 4157, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760002994, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 27, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.468958) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 11557943 bytes
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469319) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 481.9 rd, 373.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 14.1 +0.0 blob) out(11.0 +0.0 blob), read-write-amplify(326.9) write-amplify(142.8) OK, records in: 4667, records dropped: 510 output_compression: NoCompression
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.469333) EVENT_LOG_v1 {"time_micros": 1760002994469326, "job": 12, "event": "compaction_finished", "compaction_time_micros": 30932, "compaction_time_cpu_micros": 17090, "output_level": 6, "num_output_files": 1, "total_output_size": 11557943, "num_input_records": 4667, "num_output_records": 4157, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994469427, "job": 12, "event": "table_file_deletion", "file_number": 26}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000024.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760002994470949, "job": 12, "event": "table_file_deletion", "file_number": 24}
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.437854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471074) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471080) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:43:14.471082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:43:14 compute-2 sudo[62168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imafqzhlsgksinjkdykpmgmkulccnjaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002994.4219935-1678-86259468774148/AnsiballZ_stat.py'
Oct 09 09:43:14 compute-2 sudo[62168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:14 compute-2 python3.9[62170]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:14 compute-2 sudo[62168]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:15 compute-2 sudo[62322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-danghwaihdxpauifgesyelisewsualtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.081532-1705-80629814026262/AnsiballZ_file.py'
Oct 09 09:43:15 compute-2 sudo[62322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:15 compute-2 python3.9[62324]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:15 compute-2 sudo[62322]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:15 compute-2 ceph-mon[5983]: pgmap v288: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:15.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:15 compute-2 sudo[62399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpvisrdiozvgyzdajkhoifjybqqxofdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.081532-1705-80629814026262/AnsiballZ_stat.py'
Oct 09 09:43:15 compute-2 sudo[62399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:15 compute-2 python3.9[62401]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:15 compute-2 sudo[62399]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:16 compute-2 sudo[62550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmvmdjcgccgdtrkojbqinmtquxeldtdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.8268647-1705-134547807455674/AnsiballZ_copy.py'
Oct 09 09:43:16 compute-2 sudo[62550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:16 compute-2 python3.9[62553]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760002995.8268647-1705-134547807455674/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:16 compute-2 sudo[62550]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:16.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:16 compute-2 sudo[62627]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzqhvkgvmgjndzdqzpgouoqncfvqmwkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.8268647-1705-134547807455674/AnsiballZ_systemd.py'
Oct 09 09:43:16 compute-2 sudo[62627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:16 compute-2 python3.9[62629]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:43:16 compute-2 systemd[1]: Reloading.
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:16 compute-2 systemd-rc-local-generator[62650]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:16 compute-2 systemd-sysv-generator[62653]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:16 compute-2 sudo[62627]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:17 compute-2 sudo[62737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjvfegoimjvadtcninrujlnrjiieinp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002995.8268647-1705-134547807455674/AnsiballZ_systemd.py'
Oct 09 09:43:17 compute-2 sudo[62737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:17 compute-2 python3.9[62739]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:17 compute-2 systemd[1]: Reloading.
Oct 09 09:43:17 compute-2 ceph-mon[5983]: pgmap v289: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:17 compute-2 systemd-sysv-generator[62765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:17 compute-2 systemd-rc-local-generator[62762]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:17.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:17 compute-2 systemd[1]: Starting ovn_controller container...
Oct 09 09:43:17 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:43:17 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1132d56d96b82cea4f762b28bfa85a63cfdf43d1885ea300deef7166cf60aaf3/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct 09 09:43:17 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460.
Oct 09 09:43:17 compute-2 podman[62782]: 2025-10-09 09:43:17.721253125 +0000 UTC m=+0.078741962 container init 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:43:17 compute-2 ovn_controller[62794]: + sudo -E kolla_set_configs
Oct 09 09:43:17 compute-2 podman[62782]: 2025-10-09 09:43:17.739289809 +0000 UTC m=+0.096778626 container start 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct 09 09:43:17 compute-2 edpm-start-podman-container[62782]: ovn_controller
Oct 09 09:43:17 compute-2 systemd[1]: Created slice User Slice of UID 0.
Oct 09 09:43:17 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 09:43:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:17 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 09:43:17 compute-2 systemd[1]: Starting User Manager for UID 0...
Oct 09 09:43:17 compute-2 systemd[62820]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 09:43:17 compute-2 edpm-start-podman-container[62781]: Creating additional drop-in dependency for "ovn_controller" (2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460)
Oct 09 09:43:17 compute-2 podman[62801]: 2025-10-09 09:43:17.805352368 +0000 UTC m=+0.057909116 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 09 09:43:17 compute-2 systemd[1]: Reloading.
Oct 09 09:43:17 compute-2 systemd[62820]: Queued start job for default target Main User Target.
Oct 09 09:43:17 compute-2 systemd[62820]: Created slice User Application Slice.
Oct 09 09:43:17 compute-2 systemd[62820]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 09:43:17 compute-2 systemd[62820]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:43:17 compute-2 systemd[62820]: Reached target Paths.
Oct 09 09:43:17 compute-2 systemd[62820]: Reached target Timers.
Oct 09 09:43:17 compute-2 systemd[62820]: Starting D-Bus User Message Bus Socket...
Oct 09 09:43:17 compute-2 systemd[62820]: Starting Create User's Volatile Files and Directories...
Oct 09 09:43:17 compute-2 systemd-rc-local-generator[62867]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:17 compute-2 systemd[62820]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:43:17 compute-2 systemd[62820]: Reached target Sockets.
Oct 09 09:43:17 compute-2 systemd[62820]: Finished Create User's Volatile Files and Directories.
Oct 09 09:43:17 compute-2 systemd[62820]: Reached target Basic System.
Oct 09 09:43:17 compute-2 systemd[62820]: Reached target Main User Target.
Oct 09 09:43:17 compute-2 systemd[62820]: Startup finished in 102ms.
Oct 09 09:43:17 compute-2 systemd-sysv-generator[62870]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:18 compute-2 systemd[1]: Started User Manager for UID 0.
Oct 09 09:43:18 compute-2 systemd[1]: Started ovn_controller container.
Oct 09 09:43:18 compute-2 systemd[1]: 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460-67d7d7bdc0cacc96.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:43:18 compute-2 systemd[1]: 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460-67d7d7bdc0cacc96.service: Failed with result 'exit-code'.
Oct 09 09:43:18 compute-2 systemd[1]: Started Session c1 of User root.
Oct 09 09:43:18 compute-2 sudo[62737]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Validating config file
Oct 09 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:43:18 compute-2 ovn_controller[62794]: INFO:__main__:Writing out command to execute
Oct 09 09:43:18 compute-2 systemd[1]: session-c1.scope: Deactivated successfully.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: ++ cat /run_command
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + ARGS=
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + sudo kolla_copy_cacerts
Oct 09 09:43:18 compute-2 systemd[1]: Started Session c2 of User root.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + [[ ! -n '' ]]
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + . kolla_extend_start
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + umask 0022
Oct 09 09:43:18 compute-2 ovn_controller[62794]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct 09 09:43:18 compute-2 systemd[1]: session-c2.scope: Deactivated successfully.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1465] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1471] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1478] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1482] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1484] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:43:18 compute-2 kernel: br-int: entered promiscuous mode
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00018|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00021|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|pinctrl(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|rconn(ovn_pinctrl1)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-2 ovn_controller[62794]: 2025-10-09T09:43:18Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1619] manager: (ovn-fc69d3-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1623] manager: (ovn-ef2171-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1662] manager: (ovn-1479fb-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Oct 09 09:43:18 compute-2 systemd-udevd[62921]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:43:18 compute-2 kernel: genev_sys_6081: entered promiscuous mode
Oct 09 09:43:18 compute-2 systemd-udevd[62922]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1770] device (genev_sys_6081): carrier: link connected
Oct 09 09:43:18 compute-2 NetworkManager[984]: <info>  [1760002998.1773] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Oct 09 09:43:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:18.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:18 compute-2 sudo[63051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rithazjxblzrzhrycgxcjdulcpmapwfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002998.2053864-1789-181142536316868/AnsiballZ_command.py'
Oct 09 09:43:18 compute-2 sudo[63051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:18 compute-2 python3.9[63053]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:18 compute-2 ovs-vsctl[63054]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct 09 09:43:18 compute-2 sudo[63051]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:18 compute-2 sudo[63204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdkprjngjqxqswysriuvwejdeugrhncw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002998.70767-1814-138423934712144/AnsiballZ_command.py'
Oct 09 09:43:18 compute-2 sudo[63204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:19 compute-2 python3.9[63206]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:19 compute-2 ovs-vsctl[63208]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct 09 09:43:19 compute-2 sudo[63204]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:19 compute-2 ceph-mon[5983]: pgmap v290: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:19.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:19 compute-2 sudo[63360]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmxbszcnlthzxeqyptjhrpkqlkvlefjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760002999.4193497-1855-44727812586274/AnsiballZ_command.py'
Oct 09 09:43:19 compute-2 sudo[63360]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:19 compute-2 python3.9[63362]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:43:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:19 compute-2 ovs-vsctl[63363]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct 09 09:43:19 compute-2 sudo[63360]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:20 compute-2 sshd-session[52001]: Connection closed by 192.168.122.30 port 41078
Oct 09 09:43:20 compute-2 sshd-session[51997]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:43:20 compute-2 systemd[1]: session-33.scope: Deactivated successfully.
Oct 09 09:43:20 compute-2 systemd[1]: session-33.scope: Consumed 41.376s CPU time.
Oct 09 09:43:20 compute-2 systemd-logind[800]: Session 33 logged out. Waiting for processes to exit.
Oct 09 09:43:20 compute-2 systemd-logind[800]: Removed session 33.
Oct 09 09:43:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:20.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:21 compute-2 ceph-mon[5983]: pgmap v291: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:21.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:22.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:23 compute-2 ceph-mon[5983]: pgmap v292: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:23.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:24.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:24 compute-2 sshd-session[63393]: Accepted publickey for zuul from 192.168.122.30 port 42136 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:43:24 compute-2 systemd-logind[800]: New session 35 of user zuul.
Oct 09 09:43:24 compute-2 systemd[1]: Started Session 35 of User zuul.
Oct 09 09:43:24 compute-2 sshd-session[63393]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:43:25 compute-2 ceph-mon[5983]: pgmap v293: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:25.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:25 compute-2 python3.9[63547]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:43:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:26.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:26 compute-2 sudo[63702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afaruuytaemvpvpmzqauwmliequbczjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003006.1805024-64-115149082799493/AnsiballZ_file.py'
Oct 09 09:43:26 compute-2 sudo[63702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:26 compute-2 python3.9[63704]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:26 compute-2 sudo[63702]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:26 compute-2 sudo[63854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dijqrfusnwvcwakityfqukgnfnghneja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003006.786489-64-118023311196304/AnsiballZ_file.py'
Oct 09 09:43:26 compute-2 sudo[63854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:27 compute-2 python3.9[63856]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:27 compute-2 sudo[63854]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:27 compute-2 sudo[64006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubdaazsembvvrgxfkeyghmclqdqvcltr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003007.249216-64-89023257257381/AnsiballZ_file.py'
Oct 09 09:43:27 compute-2 sudo[64006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:27 compute-2 ceph-mon[5983]: pgmap v294: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:27.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:27 compute-2 python3.9[64008]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:27 compute-2 sudo[64006]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:27 compute-2 sudo[64159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdxczsmguyzagpkzlhmopjopxuxvljzp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003007.7043655-64-68743375443563/AnsiballZ_file.py'
Oct 09 09:43:27 compute-2 sudo[64159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:28 compute-2 python3.9[64161]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:28 compute-2 sudo[64159]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:28 compute-2 systemd[1]: Stopping User Manager for UID 0...
Oct 09 09:43:28 compute-2 systemd[62820]: Activating special unit Exit the Session...
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped target Main User Target.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped target Basic System.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped target Paths.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped target Sockets.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped target Timers.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:43:28 compute-2 systemd[62820]: Closed D-Bus User Message Bus Socket.
Oct 09 09:43:28 compute-2 systemd[62820]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:43:28 compute-2 systemd[62820]: Removed slice User Application Slice.
Oct 09 09:43:28 compute-2 systemd[62820]: Reached target Shutdown.
Oct 09 09:43:28 compute-2 systemd[62820]: Finished Exit the Session.
Oct 09 09:43:28 compute-2 systemd[62820]: Reached target Exit the Session.
Oct 09 09:43:28 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 09:43:28 compute-2 systemd[1]: Stopped User Manager for UID 0.
Oct 09 09:43:28 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 09:43:28 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 09:43:28 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 09:43:28 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 09:43:28 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 09:43:28 compute-2 sudo[64314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbuchobxlprxnakiswvqlyztkuhljenl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003008.1706407-64-277617115028598/AnsiballZ_file.py'
Oct 09 09:43:28 compute-2 sudo[64314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:28.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:28 compute-2 python3.9[64316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:28 compute-2 sudo[64314]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:29 compute-2 sudo[64467]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:29 compute-2 sudo[64467]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:29 compute-2 sudo[64467]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:29 compute-2 python3.9[64466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:43:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:29 compute-2 ceph-mon[5983]: pgmap v295: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:29.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:29 compute-2 sudo[64642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvbxbxbsytgwzsdoslsfnofqarumpwlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003009.31023-196-30284343212765/AnsiballZ_seboolean.py'
Oct 09 09:43:29 compute-2 sudo[64642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:29 compute-2 python3.9[64644]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct 09 09:43:30 compute-2 sudo[64642]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:30.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:30 compute-2 python3.9[64795]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:31 compute-2 python3.9[64916]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003010.4273999-220-235911692735263/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:31 compute-2 ceph-mon[5983]: pgmap v296: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:31.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:31 compute-2 python3.9[65067]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:32 compute-2 python3.9[65189]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003011.5769122-265-231101159802754/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:32.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:32 compute-2 sudo[65339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipiurbnlzkuaexpfjgzjhgtjbfjwgpyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003012.5956879-316-180203033276397/AnsiballZ_setup.py'
Oct 09 09:43:32 compute-2 sudo[65339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:33 compute-2 python3.9[65341]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:43:33 compute-2 sudo[65339]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:33 compute-2 sudo[65424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpfapnxgetsbsgcunnetrhnpfjcqcgyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003012.5956879-316-180203033276397/AnsiballZ_dnf.py'
Oct 09 09:43:33 compute-2 sudo[65424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:33 compute-2 ceph-mon[5983]: pgmap v297: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:33.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:33 compute-2 python3.9[65426]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:43:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:34.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:34 compute-2 sudo[65424]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:35 compute-2 sudo[65578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryejnwltqmbabcauqqlxknyhqekdwszk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003014.7697554-352-101414082554466/AnsiballZ_systemd.py'
Oct 09 09:43:35 compute-2 sudo[65578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:35 compute-2 python3.9[65580]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:43:35 compute-2 sudo[65578]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:35 compute-2 ceph-mon[5983]: pgmap v298: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:36 compute-2 python3.9[65734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:36.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:36 compute-2 python3.9[65857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003015.6655996-376-19616262839221/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:36 compute-2 python3.9[66007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:37 compute-2 python3.9[66128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003016.5473557-376-183098665605930/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:37 compute-2 ceph-mon[5983]: pgmap v299: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:38 compute-2 python3.9[66280]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct 09 09:43:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:38.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct 09 09:43:38 compute-2 python3.9[66401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003017.976951-508-226953453418663/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:39 compute-2 python3.9[66551]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:39 compute-2 ceph-mon[5983]: pgmap v300: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:39.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:39 compute-2 python3.9[66672]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003018.8595133-508-66796641543675/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:40 compute-2 python3.9[66823]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:43:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:40.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:40 compute-2 sudo[66976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypdzajjsofrcmzxbivpjyvgzenoeevj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.2773597-622-201472599165639/AnsiballZ_file.py'
Oct 09 09:43:40 compute-2 sudo[66976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:40 compute-2 python3.9[66978]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:40 compute-2 sudo[66976]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:41 compute-2 sudo[67128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qitylyadpwduxchzdthsvcpkhnsosasm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.7986324-646-71412193045773/AnsiballZ_stat.py'
Oct 09 09:43:41 compute-2 sudo[67128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-2 python3.9[67130]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:41 compute-2 sudo[67128]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:41 compute-2 sudo[67206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khuyniijkwosjniakdazxlfjyelthxqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003020.7986324-646-71412193045773/AnsiballZ_file.py'
Oct 09 09:43:41 compute-2 sudo[67206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-2 python3.9[67208]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:41 compute-2 sudo[67206]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:41 compute-2 ceph-mon[5983]: pgmap v301: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:41.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:41 compute-2 sudo[67359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrrrikffdraxxnidckqhgdwljogasdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003021.6037314-646-79053197489810/AnsiballZ_stat.py'
Oct 09 09:43:41 compute-2 sudo[67359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:41 compute-2 python3.9[67361]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:41 compute-2 sudo[67359]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:42 compute-2 sudo[67437]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqspdwsfdhbpxkfnfmufbxxnnougyget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003021.6037314-646-79053197489810/AnsiballZ_file.py'
Oct 09 09:43:42 compute-2 sudo[67437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:42 compute-2 python3.9[67439]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:42 compute-2 sudo[67437]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:42.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:42 compute-2 sudo[67590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwglyirghatvmhcgswrcuowkjbbvkzos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.4097292-715-254151998459500/AnsiballZ_file.py'
Oct 09 09:43:42 compute-2 sudo[67590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:42 compute-2 python3.9[67592]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:42 compute-2 sudo[67590]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:43 compute-2 sudo[67742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olckxfylybxggkbvtlaxthlomoldinho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.8997283-739-257153233552167/AnsiballZ_stat.py'
Oct 09 09:43:43 compute-2 sudo[67742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:43 compute-2 python3.9[67744]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:43 compute-2 sudo[67742]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:43 compute-2 sudo[67820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahuwnevzewqrycgrpjrfoogilybmrwzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003022.8997283-739-257153233552167/AnsiballZ_file.py'
Oct 09 09:43:43 compute-2 sudo[67820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:43.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:43 compute-2 ceph-mon[5983]: pgmap v302: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:43 compute-2 python3.9[67822]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:43 compute-2 sudo[67820]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:43 compute-2 sudo[67973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bppswohiehvnjskfbyvkxmjakhodlznx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003023.7449396-776-175759480061369/AnsiballZ_stat.py'
Oct 09 09:43:43 compute-2 sudo[67973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:44 compute-2 python3.9[67975]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:44 compute-2 sudo[67973]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:44 compute-2 sudo[68052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htpkftgmsqkrvkxuivnlgxdzawvcqkvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003023.7449396-776-175759480061369/AnsiballZ_file.py'
Oct 09 09:43:44 compute-2 sudo[68052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:44.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:44 compute-2 python3.9[68054]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:44 compute-2 sudo[68052]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:44 compute-2 sudo[68204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egpjerbmnxlrwkehzfvpcxhghwwzgzwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003024.5711334-812-164495655486045/AnsiballZ_systemd.py'
Oct 09 09:43:44 compute-2 sudo[68204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:45 compute-2 python3.9[68206]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:45 compute-2 systemd[1]: Reloading.
Oct 09 09:43:45 compute-2 systemd-rc-local-generator[68229]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:45 compute-2 systemd-sysv-generator[68233]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:45 compute-2 sudo[68204]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:45.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:45 compute-2 ceph-mon[5983]: pgmap v303: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:45 compute-2 sudo[68395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dytraxzsxaeqencmtbyepjocxtzvwuhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003025.4144027-835-58844943508007/AnsiballZ_stat.py'
Oct 09 09:43:45 compute-2 sudo[68395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:45 compute-2 python3.9[68397]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:45 compute-2 sudo[68395]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:45 compute-2 sudo[68473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dypxcgyjfpjcfhflkthbqntvorwjodtl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003025.4144027-835-58844943508007/AnsiballZ_file.py'
Oct 09 09:43:45 compute-2 sudo[68473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:46 compute-2 python3.9[68475]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:46 compute-2 sudo[68473]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:46.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:46 compute-2 sudo[68626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuxndomfuefgzolccjljbkjmxsbbfbvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003026.2339547-871-108495238439663/AnsiballZ_stat.py'
Oct 09 09:43:46 compute-2 sudo[68626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-2 python3.9[68628]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:46 compute-2 sudo[68626]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:46 compute-2 sudo[68704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rornkplyzmxmkkblncihwoyltytxnuon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003026.2339547-871-108495238439663/AnsiballZ_file.py'
Oct 09 09:43:46 compute-2 sudo[68704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:46 compute-2 python3.9[68706]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:46 compute-2 sudo[68704]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:47 compute-2 sudo[68856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qarmwcvqczpiqaskomlohiksewigikjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003027.0692549-907-275605636942722/AnsiballZ_systemd.py'
Oct 09 09:43:47 compute-2 sudo[68856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:47 compute-2 python3.9[68858]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:43:47 compute-2 systemd[1]: Reloading.
Oct 09 09:43:47 compute-2 systemd-rc-local-generator[68880]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:43:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:47 compute-2 systemd-sysv-generator[68883]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:43:47 compute-2 ceph-mon[5983]: pgmap v304: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:47 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:43:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:47 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:43:47 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:43:47 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:43:47 compute-2 sudo[68856]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:48 compute-2 ovn_controller[62794]: 2025-10-09T09:43:48Z|00025|memory|INFO|16128 kB peak resident set size after 30.1 seconds
Oct 09 09:43:48 compute-2 ovn_controller[62794]: 2025-10-09T09:43:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Oct 09 09:43:48 compute-2 podman[69001]: 2025-10-09 09:43:48.23002796 +0000 UTC m=+0.060980065 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:43:48 compute-2 sudo[69074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxiuuvyxoiwywohmvracwxinifjxdcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.0487475-937-206356975451426/AnsiballZ_file.py'
Oct 09 09:43:48 compute-2 sudo[69074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:48.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:48 compute-2 python3.9[69076]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:48 compute-2 sudo[69074]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:48 compute-2 sudo[69226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpdglvuqteeqtknmldhrcxdxwpkwufmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.5918083-962-202310641851947/AnsiballZ_stat.py'
Oct 09 09:43:48 compute-2 sudo[69226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:48 compute-2 python3.9[69228]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:48 compute-2 sudo[69226]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-2 sudo[69311]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:43:49 compute-2 sudo[69311]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:43:49 compute-2 sudo[69311]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-2 sudo[69374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osppfkzhrowbunmkcfcofpqzkmjhrpgj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003028.5918083-962-202310641851947/AnsiballZ_copy.py'
Oct 09 09:43:49 compute-2 sudo[69374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:49 compute-2 python3.9[69376]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003028.5918083-962-202310641851947/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:49 compute-2 sudo[69374]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:49 compute-2 ceph-mon[5983]: pgmap v305: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:43:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:49 compute-2 sudo[69527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwkfjkodmokzerjjtjcintblatefjltj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003029.6947873-1012-169698821209600/AnsiballZ_file.py'
Oct 09 09:43:49 compute-2 sudo[69527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-2 python3.9[69529]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:43:50 compute-2 sudo[69527]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:50.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:50 compute-2 sudo[69680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnwmbrrhoymphgpacyvmcijhbslxkbx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003030.2235723-1036-15800937930524/AnsiballZ_stat.py'
Oct 09 09:43:50 compute-2 sudo[69680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-2 python3.9[69682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:43:50 compute-2 sudo[69680]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:50 compute-2 sudo[69803]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lrvepwvikyzzsbnjmelwdrlpirqyswrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003030.2235723-1036-15800937930524/AnsiballZ_copy.py'
Oct 09 09:43:50 compute-2 sudo[69803]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:50 compute-2 python3.9[69805]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003030.2235723-1036-15800937930524/.source.json _original_basename=.dx0hf3hb follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:51 compute-2 sudo[69803]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:51 compute-2 sudo[69955]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-svujgjzzpebllbxwtwcuwcsfcfhefqli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.1531246-1081-223338870461676/AnsiballZ_file.py'
Oct 09 09:43:51 compute-2 sudo[69955]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:51 compute-2 python3.9[69957]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:43:51 compute-2 sudo[69955]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:51.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:51 compute-2 ceph-mon[5983]: pgmap v306: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:51 compute-2 sudo[70108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajzrihdaaukwffulioiezvpsaaegmhyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.703187-1105-187648047349065/AnsiballZ_stat.py'
Oct 09 09:43:51 compute-2 sudo[70108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:52 compute-2 sudo[70108]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:52 compute-2 sudo[70232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybzzatplmhlztjbwkrsbfkzhbzsmkwol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003031.703187-1105-187648047349065/AnsiballZ_copy.py'
Oct 09 09:43:52 compute-2 sudo[70232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct 09 09:43:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:52.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct 09 09:43:52 compute-2 sudo[70232]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:53 compute-2 sudo[70384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwaatiissijgfgrliykrwtoevgujudnb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003032.7784936-1156-147246318219033/AnsiballZ_container_config_data.py'
Oct 09 09:43:53 compute-2 sudo[70384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:53 compute-2 python3.9[70386]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct 09 09:43:53 compute-2 sudo[70384]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:53.577 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:53 compute-2 ceph-mon[5983]: pgmap v307: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:53 compute-2 sudo[70537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztswtabziffukiyqrlvfdwsyfeqyznkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003033.480765-1183-183237232400944/AnsiballZ_container_config_hash.py'
Oct 09 09:43:53 compute-2 sudo[70537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:53 compute-2 python3.9[70539]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:43:53 compute-2 sudo[70537]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000007s ======
Oct 09 09:43:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:54.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000007s
Oct 09 09:43:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:54 compute-2 sudo[70690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqfsbqymbijyzcphwbicbjrfxlpmnsru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003034.17296-1210-102549001623887/AnsiballZ_podman_container_info.py'
Oct 09 09:43:54 compute-2 sudo[70690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:54 compute-2 python3.9[70692]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:43:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:54 compute-2 sudo[70690]: pam_unix(sudo:session): session closed for user root
Oct 09 09:43:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:55.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:55 compute-2 ceph-mon[5983]: pgmap v308: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:55 compute-2 sudo[70862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmfivkzkhpldfraeaunviqnyjsvqtewg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003035.555418-1249-267501539807941/AnsiballZ_edpm_container_manage.py'
Oct 09 09:43:55 compute-2 sudo[70862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:43:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:43:56 compute-2 python3[70864]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:43:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:56.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000006s ======
Oct 09 09:43:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:57.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000006s
Oct 09 09:43:57 compute-2 ceph-mon[5983]: pgmap v309: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:43:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:43:58.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:43:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:43:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:43:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:43:59.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:43:59 compute-2 ceph-mon[5983]: pgmap v310: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:43:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:43:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:43:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:43:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:00.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:01.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:01 compute-2 ceph-mon[5983]: pgmap v311: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:02.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:03.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:03 compute-2 ceph-mon[5983]: pgmap v312: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:04.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:04 compute-2 podman[70876]: 2025-10-09 09:44:04.521321872 +0000 UTC m=+8.357611900 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-2 podman[70983]: 2025-10-09 09:44:04.616499572 +0000 UTC m=+0.028427864 container create aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 09 09:44:04 compute-2 podman[70983]: 2025-10-09 09:44:04.603854343 +0000 UTC m=+0.015782655 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-2 python3[70864]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:44:04 compute-2 sudo[70862]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:05 compute-2 sudo[71161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axzvewoxwhxqrtmuknkghudgdmjjjujr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.18452-1273-238346478774060/AnsiballZ_stat.py'
Oct 09 09:44:05 compute-2 sudo[71161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:05 compute-2 python3.9[71163]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:44:05 compute-2 sudo[71161]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:05.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:05 compute-2 ceph-mon[5983]: pgmap v313: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:05 compute-2 sudo[71316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xnbpvukrxbgynfgyupfvnpdripbfbfml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.7610912-1300-165902568439959/AnsiballZ_file.py'
Oct 09 09:44:05 compute-2 sudo[71316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:06 compute-2 python3.9[71318]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:06 compute-2 sudo[71316]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:06 compute-2 sudo[71393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rasvbomeqgmwxycysuuxuvavqagkluuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003045.7610912-1300-165902568439959/AnsiballZ_stat.py'
Oct 09 09:44:06 compute-2 sudo[71393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:06.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:06 compute-2 python3.9[71395]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:44:06 compute-2 sudo[71393]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:06 compute-2 sudo[71544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhhjymajvxpfnnjddqbqebapxgawrnaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4922872-1300-212613524092654/AnsiballZ_copy.py'
Oct 09 09:44:06 compute-2 sudo[71544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:06 compute-2 python3.9[71546]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003046.4922872-1300-212613524092654/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:06 compute-2 sudo[71544]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:07 compute-2 sudo[71620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wptlmnuxpvyvhzpnitgtxmemyweflccu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4922872-1300-212613524092654/AnsiballZ_systemd.py'
Oct 09 09:44:07 compute-2 sudo[71620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:07 compute-2 python3.9[71622]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:07 compute-2 systemd[1]: Reloading.
Oct 09 09:44:07 compute-2 systemd-sysv-generator[71651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:07 compute-2 systemd-rc-local-generator[71645]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:07.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:07 compute-2 sudo[71620]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:07 compute-2 ceph-mon[5983]: pgmap v314: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:07 compute-2 sudo[71732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhkhwmzpzpfquitqrqxflzpyxirsyaab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003046.4922872-1300-212613524092654/AnsiballZ_systemd.py'
Oct 09 09:44:07 compute-2 sudo[71732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:08 compute-2 python3.9[71734]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:08 compute-2 systemd[1]: Reloading.
Oct 09 09:44:08 compute-2 systemd-sysv-generator[71761]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:08 compute-2 systemd-rc-local-generator[71758]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:08.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:08 compute-2 systemd[1]: Starting ovn_metadata_agent container...
Oct 09 09:44:08 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:44:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cb402dff26e4d2635338b2d9f3c87774b48de2412b5a737e7b0cd9dd54e99e/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct 09 09:44:08 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98cb402dff26e4d2635338b2d9f3c87774b48de2412b5a737e7b0cd9dd54e99e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:44:08 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335.
Oct 09 09:44:08 compute-2 podman[71776]: 2025-10-09 09:44:08.542509982 +0000 UTC m=+0.089556496 container init aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, 
io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + sudo -E kolla_set_configs
Oct 09 09:44:08 compute-2 podman[71776]: 2025-10-09 09:44:08.564285202 +0000 UTC m=+0.111331696 container start aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 09:44:08 compute-2 edpm-start-podman-container[71776]: ovn_metadata_agent
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Validating config file
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Copying service configuration files
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Writing out command to execute
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: ++ cat /run_command
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + CMD=neutron-ovn-metadata-agent
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + ARGS=
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + sudo kolla_copy_cacerts
Oct 09 09:44:08 compute-2 podman[71795]: 2025-10-09 09:44:08.626581351 +0000 UTC m=+0.053213757 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:44:08 compute-2 edpm-start-podman-container[71775]: Creating additional drop-in dependency for "ovn_metadata_agent" (aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335)
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + [[ ! -n '' ]]
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + . kolla_extend_start
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: Running command: 'neutron-ovn-metadata-agent'
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + umask 0022
Oct 09 09:44:08 compute-2 ovn_metadata_agent[71788]: + exec neutron-ovn-metadata-agent
Oct 09 09:44:08 compute-2 systemd[1]: Reloading.
Oct 09 09:44:08 compute-2 systemd-rc-local-generator[71854]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:08 compute-2 systemd-sysv-generator[71861]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:08 compute-2 systemd[1]: Started ovn_metadata_agent container.
Oct 09 09:44:08 compute-2 sudo[71732]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:09 compute-2 sudo[71894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:09 compute-2 sudo[71894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:09 compute-2 sudo[71894]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:09 compute-2 sshd-session[63396]: Connection closed by 192.168.122.30 port 42136
Oct 09 09:44:09 compute-2 sshd-session[63393]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:44:09 compute-2 systemd[1]: session-35.scope: Deactivated successfully.
Oct 09 09:44:09 compute-2 systemd[1]: session-35.scope: Consumed 41.766s CPU time.
Oct 09 09:44:09 compute-2 systemd-logind[800]: Session 35 logged out. Waiting for processes to exit.
Oct 09 09:44:09 compute-2 systemd-logind[800]: Removed session 35.
Oct 09 09:44:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:09.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:09 compute-2 sudo[71921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:44:09 compute-2 sudo[71921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:09 compute-2 sudo[71921]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:09 compute-2 ceph-mon[5983]: pgmap v315: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:09 compute-2 sudo[71946]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:44:09 compute-2 sudo[71946]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:10 compute-2 sudo[71946]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 INFO neutron.common.config [-] Logging enabled!
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.222 71793 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.223 71793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.224 71793 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.225 71793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.226 71793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.227 71793 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.228 71793 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.229 71793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.230 71793 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.231 71793 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.232 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.233 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.234 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.235 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.236 71793 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.237 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.238 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.239 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.240 71793 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.241 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.242 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.243 71793 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.244 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.245 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.246 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.247 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.248 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.249 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.250 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.251 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.252 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.253 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.254 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.255 71793 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.262 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.262 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.263 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.274 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name c24becb7-a313-4586-a73e-1530a4367da3 (UUID: c24becb7-a313-4586-a73e-1530a4367da3) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.291 71793 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.293 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.298 71793 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.302 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'c24becb7-a313-4586-a73e-1530a4367da3'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], external_ids={}, name=c24becb7-a313-4586-a73e-1530a4367da3, nb_cfg_timestamp=1760003006161, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.303 71793 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f38807e6af0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.303 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.304 71793 INFO oslo_service.service [-] Starting 1 workers
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.308 71793 DEBUG oslo_service.service [-] Started child 72001 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.311 72001 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-895417'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.311 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmp3edxmyi4/privsep.sock']
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.327 72001 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.328 72001 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.328 72001 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.330 72001 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.335 72001 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.339 72001 INFO eventlet.wsgi.server [-] (72001) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Oct 09 09:44:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:10.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:10 compute-2 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.847 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.847 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp3edxmyi4/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.764 72006 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.768 72006 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.769 72006 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.770 72006 INFO oslo.privsep.daemon [-] privsep daemon running as pid 72006
Oct 09 09:44:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:10.850 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a70575-8af0-4ee8-8eed-e12c59ebaa37]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:44:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.256 72006 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:44:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:11.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.717 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[af3c2038-a93d-4b73-8057-cad477566a60]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.719 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, column=external_ids, values=({'neutron:ovn-metadata-id': '2b22cda5-e8f4-5cad-b7de-4c4bd08d93f0'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.724 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.728 71793 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.729 71793 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.730 71793 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.731 71793 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] host                           = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.732 71793 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.733 71793 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.734 71793 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.735 71793 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.736 71793 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.737 71793 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.738 71793 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.739 71793 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.740 71793 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.741 71793 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.742 71793 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.743 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.744 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.745 71793 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.746 71793 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.747 71793 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.748 71793 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.749 71793 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.750 71793 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.751 71793 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.752 71793 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.753 71793 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.754 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.755 71793 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.756 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.757 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.758 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.759 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.760 71793 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:44:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:44:11.760 71793 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:12 compute-2 ceph-mon[5983]: pgmap v316: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:12.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:13.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:13 compute-2 sudo[72014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:44:13 compute-2 sudo[72014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:13 compute-2 sudo[72014]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:14 compute-2 ceph-mon[5983]: pgmap v317: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:14 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:44:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:14.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:14 compute-2 sshd-session[72040]: Accepted publickey for zuul from 192.168.122.30 port 44776 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:44:14 compute-2 systemd-logind[800]: New session 36 of user zuul.
Oct 09 09:44:14 compute-2 systemd[1]: Started Session 36 of User zuul.
Oct 09 09:44:14 compute-2 sshd-session[72040]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:44:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:15 compute-2 python3.9[72193]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:44:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:15.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:16 compute-2 sudo[72348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdlywwwhruqzhsmfzbcngletmrpdlkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003055.7440934-65-9620470510077/AnsiballZ_command.py'
Oct 09 09:44:16 compute-2 sudo[72348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:16 compute-2 ceph-mon[5983]: pgmap v318: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:16 compute-2 python3.9[72350]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:16 compute-2 sudo[72348]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:16.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:17 compute-2 sudo[72510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eujdcuukftxnyzedkmovswxqjrbjpzbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003056.5867887-97-230848948046302/AnsiballZ_systemd_service.py'
Oct 09 09:44:17 compute-2 sudo[72510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:17 compute-2 python3.9[72512]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:17 compute-2 systemd[1]: Reloading.
Oct 09 09:44:17 compute-2 systemd-sysv-generator[72538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:17 compute-2 systemd-rc-local-generator[72535]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:17 compute-2 sudo[72510]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:17.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:18 compute-2 ceph-mon[5983]: pgmap v319: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:44:18 compute-2 python3.9[72698]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:44:18 compute-2 network[72716]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:44:18 compute-2 network[72717]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:44:18 compute-2 network[72718]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:44:18 compute-2 podman[72723]: 2025-10-09 09:44:18.379669528 +0000 UTC m=+0.067990031 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:44:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:18.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:19.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:20 compute-2 ceph-mon[5983]: pgmap v320: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:20.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:21.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:21 compute-2 sudo[73008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbdutxdpnpnztntcebmjcmmnaqmelhkk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003061.489988-155-169101075324347/AnsiballZ_systemd_service.py'
Oct 09 09:44:21 compute-2 sudo[73008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:21 compute-2 python3.9[73010]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:21 compute-2 sudo[73008]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:22 compute-2 ceph-mon[5983]: pgmap v321: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:22 compute-2 sudo[73162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grftmnallwteugtnorxhdkdlzuzqagij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003062.1025076-155-7838955284814/AnsiballZ_systemd_service.py'
Oct 09 09:44:22 compute-2 sudo[73162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:22.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:22 compute-2 python3.9[73164]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:22 compute-2 sudo[73162]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:22 compute-2 sudo[73315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmsufkitmctqdzylafookheaqwevixwu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003062.6779602-155-49132023752/AnsiballZ_systemd_service.py'
Oct 09 09:44:22 compute-2 sudo[73315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:23 compute-2 python3.9[73317]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:23 compute-2 sudo[73315]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:23 compute-2 sudo[73468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zaeshocuriyiiuuswhrisnarleumdzak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003063.2502904-155-181669338948451/AnsiballZ_systemd_service.py'
Oct 09 09:44:23 compute-2 sudo[73468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:44:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:23.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:44:23 compute-2 python3.9[73470]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:23 compute-2 sudo[73468]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:23 compute-2 sudo[73622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akzoezfbfguhjsbosufxweiekkwevaxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003063.7949584-155-106205202518462/AnsiballZ_systemd_service.py'
Oct 09 09:44:23 compute-2 sudo[73622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:24 compute-2 ceph-mon[5983]: pgmap v322: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 09:44:24 compute-2 python3.9[73624]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:24 compute-2 sudo[73622]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:24.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:24 compute-2 sudo[73776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-copjghdrlwnlnuivzauaexgcpurvznyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003064.3555403-155-269832962021680/AnsiballZ_systemd_service.py'
Oct 09 09:44:24 compute-2 sudo[73776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:24 compute-2 python3.9[73778]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:24 compute-2 sudo[73776]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:25 compute-2 sudo[73929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iahuxoxepqorrxlgmxippveijalhljdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003064.919626-155-214311158762178/AnsiballZ_systemd_service.py'
Oct 09 09:44:25 compute-2 sudo[73929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:25 compute-2 python3.9[73931]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:44:25 compute-2 sudo[73929]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:25.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:26 compute-2 sudo[74083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmjakinnqglzknuaarymkcznbnqoryjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003065.6878865-310-87349943185728/AnsiballZ_file.py'
Oct 09 09:44:26 compute-2 sudo[74083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:26 compute-2 ceph-mon[5983]: pgmap v323: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:26 compute-2 python3.9[74085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:26 compute-2 sudo[74083]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:26 compute-2 sudo[74236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujbmkbklsyeensehaofwfqzlkinefydf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003066.3021953-310-203634767160856/AnsiballZ_file.py'
Oct 09 09:44:26 compute-2 sudo[74236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:26 compute-2 python3.9[74238]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:26 compute-2 sudo[74236]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:26 compute-2 sudo[74388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebntdnbvzyczezsgsltexqnyvohibfyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003066.7710311-310-47936026983041/AnsiballZ_file.py'
Oct 09 09:44:26 compute-2 sudo[74388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:27 compute-2 python3.9[74390]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:27 compute-2 sudo[74388]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:27 compute-2 sudo[74540]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqvqoolughciruwjaaacvzmjknxqsyut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003067.2378712-310-242525685948043/AnsiballZ_file.py'
Oct 09 09:44:27 compute-2 sudo[74540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:27 compute-2 python3.9[74542]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:27.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:27 compute-2 sudo[74540]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:27 compute-2 sudo[74693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzzjncjszdaqdpnssqyijwkdonkuaatn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003067.7034545-310-5174526793857/AnsiballZ_file.py'
Oct 09 09:44:27 compute-2 sudo[74693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:28 compute-2 python3.9[74695]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:28 compute-2 sudo[74693]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:28 compute-2 ceph-mon[5983]: pgmap v324: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 09:44:28 compute-2 sudo[74846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tceoymygangvkfzwuoqnvnllaiohmxvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003068.156674-310-124172028348734/AnsiballZ_file.py'
Oct 09 09:44:28 compute-2 sudo[74846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:28.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:28 compute-2 python3.9[74848]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:28 compute-2 sudo[74846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:28 compute-2 sudo[74998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tezkcphjzprbtyxhnxosuehcrllruffj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003068.615168-310-223815674041152/AnsiballZ_file.py'
Oct 09 09:44:28 compute-2 sudo[74998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:28 compute-2 python3.9[75000]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:28 compute-2 sudo[74998]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:29 compute-2 sudo[75124]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:29 compute-2 sudo[75124]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:29 compute-2 sudo[75124]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:29 compute-2 sudo[75175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqlxqtmpkcmbkgjkzissbwfgbtwgtoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003069.10554-461-15502472465365/AnsiballZ_file.py'
Oct 09 09:44:29 compute-2 sudo[75175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:29 compute-2 python3.9[75177]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:29 compute-2 sudo[75175]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:29.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:29 compute-2 sudo[75328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkrjdiawowptqwwctrrdrjjbmfsxcmyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003069.5502946-461-119156511159984/AnsiballZ_file.py'
Oct 09 09:44:29 compute-2 sudo[75328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:29 compute-2 python3.9[75330]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:29 compute-2 sudo[75328]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:30 compute-2 ceph-mon[5983]: pgmap v325: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:30 compute-2 sudo[75481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ysljkddthgyoregmuejwwevprifnopfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.0030072-461-62302748377496/AnsiballZ_file.py'
Oct 09 09:44:30 compute-2 sudo[75481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:30 compute-2 python3.9[75483]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:30 compute-2 sudo[75481]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:30.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:30 compute-2 sudo[75633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omlppacrczoxzevkfklffvizyuhezxar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.4815378-461-222194710208581/AnsiballZ_file.py'
Oct 09 09:44:30 compute-2 sudo[75633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:30 compute-2 python3.9[75635]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:30 compute-2 sudo[75633]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:31 compute-2 sudo[75785]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjnoqczwpjkswdnvbepjqpgfervjmakw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003070.9632118-461-75969643838460/AnsiballZ_file.py'
Oct 09 09:44:31 compute-2 sudo[75785]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:31 compute-2 python3.9[75787]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:31 compute-2 sudo[75785]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:31 compute-2 sudo[75938]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jerxlbvyiofwcklltfrzxprboecyrddp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003071.409249-461-46499295325180/AnsiballZ_file.py'
Oct 09 09:44:31 compute-2 sudo[75938]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:31.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:31 compute-2 python3.9[75940]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:31 compute-2 sudo[75938]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:32 compute-2 sudo[76090]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wypkceouxunebgoauwwdozdksetuldah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003071.8515027-461-223261103579835/AnsiballZ_file.py'
Oct 09 09:44:32 compute-2 sudo[76090]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:32 compute-2 ceph-mon[5983]: pgmap v326: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:44:32 compute-2 python3.9[76092]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:44:32 compute-2 sudo[76090]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:32.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:32 compute-2 sudo[76243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ziushgaugxdpqvvatcnznwmpcwnbjgni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003072.647473-614-48429450380751/AnsiballZ_command.py'
Oct 09 09:44:32 compute-2 sudo[76243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:32 compute-2 python3.9[76245]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                              systemctl disable --now certmonger.service
                                              test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                            fi
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:33 compute-2 sudo[76243]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:33.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:33 compute-2 python3.9[76397]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:44:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:34 compute-2 sudo[76548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iybsralvqfecphrotgfnyeparyppuisw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003073.9406831-668-134881220268511/AnsiballZ_systemd_service.py'
Oct 09 09:44:34 compute-2 sudo[76548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:34 compute-2 ceph-mon[5983]: pgmap v327: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:44:34 compute-2 python3.9[76550]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:44:34 compute-2 systemd[1]: Reloading.
Oct 09 09:44:34 compute-2 systemd-rc-local-generator[76572]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:44:34 compute-2 systemd-sysv-generator[76575]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:44:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:34.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:34 compute-2 sudo[76548]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:34 compute-2 sudo[76736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbmjukoagbvrmfwfkzwjjjabnpfojqru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003074.7641156-692-38058269469411/AnsiballZ_command.py'
Oct 09 09:44:34 compute-2 sudo[76736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:35 compute-2 python3.9[76738]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:35 compute-2 sudo[76736]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:35 compute-2 sudo[76889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pscoiqljdnneqyhxeqxnzntymbbfkwex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003075.2110097-692-197749248512959/AnsiballZ_command.py'
Oct 09 09:44:35 compute-2 sudo[76889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:35 compute-2 python3.9[76891]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:35 compute-2 sudo[76889]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:35.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:35 compute-2 sudo[77043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citwoemyxlmtpjygjmshmasqdcjlngcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003075.6491113-692-193885948827662/AnsiballZ_command.py'
Oct 09 09:44:35 compute-2 sudo[77043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:35 compute-2 python3.9[77045]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:36 compute-2 sudo[77043]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:36 compute-2 ceph-mon[5983]: pgmap v328: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:36 compute-2 sudo[77197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wyrvxaimerqpppgjrqfzrssixpppkobp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003076.1032689-692-209073650329031/AnsiballZ_command.py'
Oct 09 09:44:36 compute-2 sudo[77197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:36 compute-2 python3.9[77199]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:36.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:36 compute-2 sudo[77197]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:36 compute-2 sudo[77350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tsapuiptkqvdaroxoomwwlegcykyesbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003076.544559-692-150173313043559/AnsiballZ_command.py'
Oct 09 09:44:36 compute-2 sudo[77350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:36 compute-2 python3.9[77352]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:36 compute-2 sudo[77350]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:37 compute-2 ceph-mon[5983]: pgmap v329: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:37 compute-2 sudo[77503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnyqymypjcvkbppvjqheljhavwovfapj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003077.0053766-692-105488408618900/AnsiballZ_command.py'
Oct 09 09:44:37 compute-2 sudo[77503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:37 compute-2 python3.9[77505]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:37 compute-2 sudo[77503]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:37.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:37 compute-2 sudo[77657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpznrlsecaydgifliwsorfwsxabcmahn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003077.4659307-692-185342978027323/AnsiballZ_command.py'
Oct 09 09:44:37 compute-2 sudo[77657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:37 compute-2 python3.9[77659]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:44:37 compute-2 sudo[77657]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:38.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:38 compute-2 sudo[77820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqntthomixolfhkyihbrxglexrillqdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003078.3962371-854-65219033937720/AnsiballZ_getent.py'
Oct 09 09:44:38 compute-2 sudo[77820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:38 compute-2 podman[77785]: 2025-10-09 09:44:38.745441972 +0000 UTC m=+0.041538030 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Oct 09 09:44:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:38 compute-2 python3.9[77827]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct 09 09:44:38 compute-2 sudo[77820]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:39 compute-2 sudo[77981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqtswoluzagmbgjeedxfvitxzfchbpeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003079.0862825-877-5705622460811/AnsiballZ_group.py'
Oct 09 09:44:39 compute-2 sudo[77981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:39 compute-2 ceph-mon[5983]: pgmap v330: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:39 compute-2 python3.9[77983]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:44:39 compute-2 groupadd[77985]: group added to /etc/group: name=libvirt, GID=42473
Oct 09 09:44:39 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:44:39 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:44:39 compute-2 groupadd[77985]: group added to /etc/gshadow: name=libvirt
Oct 09 09:44:39 compute-2 groupadd[77985]: new group: name=libvirt, GID=42473
Oct 09 09:44:39 compute-2 sudo[77981]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:39.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:40 compute-2 sudo[78141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wclyqvsepidtgghdlzlabmbnpfgbwfxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003079.785118-901-72151630580388/AnsiballZ_user.py'
Oct 09 09:44:40 compute-2 sudo[78141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:40 compute-2 python3.9[78144]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 09:44:40 compute-2 useradd[78146]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Oct 09 09:44:40 compute-2 sudo[78141]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:40.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:40 compute-2 sudo[78302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klrjclwflkvzrcyqcxqewxjbvjpljlvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003080.7796116-934-28874667321698/AnsiballZ_setup.py'
Oct 09 09:44:40 compute-2 sudo[78302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:41 compute-2 python3.9[78304]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:44:41 compute-2 sudo[78302]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:41 compute-2 ceph-mon[5983]: pgmap v331: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:41.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:41 compute-2 sudo[78387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhncmyasagceccolxjgqtzjmstsmevnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003080.7796116-934-28874667321698/AnsiballZ_dnf.py'
Oct 09 09:44:41 compute-2 sudo[78387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:44:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:41 compute-2 python3.9[78389]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:44:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:42.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:43 compute-2 ceph-mon[5983]: pgmap v332: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:43.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:44:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:44.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:44:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:45 compute-2 ceph-mon[5983]: pgmap v333: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:45.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:46.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:47 compute-2 ceph-mon[5983]: pgmap v334: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:47.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:48.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:49 compute-2 podman[78409]: 2025-10-09 09:44:49.225377899 +0000 UTC m=+0.056527806 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 09:44:49 compute-2 sudo[78432]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:44:49 compute-2 sudo[78432]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:44:49 compute-2 sudo[78432]: pam_unix(sudo:session): session closed for user root
Oct 09 09:44:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:49 compute-2 ceph-mon[5983]: pgmap v335: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:49.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:44:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:50.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:44:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:51 compute-2 ceph-mon[5983]: pgmap v336: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:51.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:52.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:53 compute-2 ceph-mon[5983]: pgmap v337: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:53.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:54.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:44:55 compute-2 ceph-mon[5983]: pgmap v338: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:44:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:55.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:56.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:57 compute-2 ceph-mon[5983]: pgmap v339: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:44:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:57.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:44:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:44:58.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:44:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:44:59 compute-2 ceph-mon[5983]: pgmap v340: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:44:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:44:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:44:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:44:59.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:44:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:44:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:44:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:44:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:44:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000014s ======
Oct 09 09:45:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:00.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000014s
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:01 compute-2 ceph-mon[5983]: pgmap v341: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:01.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:03 compute-2 ceph-mon[5983]: pgmap v342: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:03.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:04 compute-2 kernel: SELinux:  Converting 472 SID table entries...
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:45:04 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:45:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000013s ======
Oct 09 09:45:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000013s
Oct 09 09:45:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:05 compute-2 ceph-mon[5983]: pgmap v343: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:05.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:06.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:07 compute-2 ceph-mon[5983]: pgmap v344: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:07.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:08.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:09 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=3 res=1
Oct 09 09:45:09 compute-2 podman[78667]: 2025-10-09 09:45:09.211897101 +0000 UTC m=+0.042806072 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 09:45:09 compute-2 sudo[78684]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:09 compute-2 sudo[78684]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:09 compute-2 sudo[78684]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:09 compute-2 ceph-mon[5983]: pgmap v345: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:09.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:45:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:45:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:45:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:10.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:11 compute-2 kernel: SELinux:  Converting 472 SID table entries...
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:45:11 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:45:11 compute-2 ceph-mon[5983]: pgmap v346: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:11.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:12.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:13 compute-2 ceph-mon[5983]: pgmap v347: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:13 compute-2 sudo[78723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:45:13 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Oct 09 09:45:13 compute-2 sudo[78723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:13 compute-2 sudo[78723]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:13 compute-2 sudo[78748]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:45:13 compute-2 sudo[78748]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-2 sudo[78748]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:14 compute-2 sudo[78792]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:45:14 compute-2 sudo[78792]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-2 sudo[78792]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:14 compute-2 sudo[78817]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:45:14 compute-2 sudo[78817]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:45:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Cumulative writes: 2475 writes, 14K keys, 2475 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.06 MB/s
                                          Cumulative WAL: 2475 writes, 2475 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 2475 writes, 14K keys, 2475 commit groups, 1.0 writes per commit group, ingest: 38.77 MB, 0.06 MB/s
                                          Interval WAL: 2475 writes, 2475 syncs, 1.00 writes per sync, written: 0.04 GB, 0.06 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    387.5      0.05              0.04         6    0.009       0      0       0.0       0.0
                                            L6      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.0    434.2    375.1      0.17              0.10         5    0.034     19K   2242       0.0       0.0
                                           Sum      1/0   11.02 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    328.7    378.1      0.22              0.14        11    0.020     19K   2242       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.0    330.1    379.6      0.22              0.14        10    0.022     19K   2242       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    434.2    375.1      0.17              0.10         5    0.034     19K   2242       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    393.9      0.05              0.04         5    0.011       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 600.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.021, interval 0.021
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Interval compaction: 0.08 GB write, 0.14 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 2.23 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(165,2.03 MB,0.668094%) FilterBlock(11,66.42 KB,0.0213372%) IndexBlock(11,134.28 KB,0.0431362%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:45:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:14.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:14 compute-2 sudo[78817]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: pgmap v348: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:45:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:45:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:16.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:17 compute-2 ceph-mon[5983]: pgmap v349: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:17.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:18 compute-2 sudo[78873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:45:18 compute-2 sudo[78873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:18 compute-2 sudo[78873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:45:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:19.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:20 compute-2 ceph-mon[5983]: pgmap v350: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:20 compute-2 podman[78901]: 2025-10-09 09:45:20.238743038 +0000 UTC m=+0.061789498 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:45:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:20.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:21.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:22 compute-2 ceph-mon[5983]: pgmap v351: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:22.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:23.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:24 compute-2 ceph-mon[5983]: pgmap v352: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:24.507 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:25.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:26 compute-2 ceph-mon[5983]: pgmap v353: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:26.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:27.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:28 compute-2 ceph-mon[5983]: pgmap v354: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:28.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:29 compute-2 sudo[87949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:29 compute-2 sudo[87949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:29 compute-2 sudo[87949]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:29.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:30 compute-2 ceph-mon[5983]: pgmap v355: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:30.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:31.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:32 compute-2 ceph-mon[5983]: pgmap v356: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:32.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:33.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:34 compute-2 ceph-mon[5983]: pgmap v357: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:34.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:35.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:36 compute-2 ceph-mon[5983]: pgmap v358: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:36.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:37.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:38 compute-2 ceph-mon[5983]: pgmap v359: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:38.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:39.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:40 compute-2 ceph-mon[5983]: pgmap v360: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:40 compute-2 podman[95719]: 2025-10-09 09:45:40.200385388 +0000 UTC m=+0.036623841 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:45:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:40.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:41.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:42 compute-2 ceph-mon[5983]: pgmap v361: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:42.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:43.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:44 compute-2 ceph-mon[5983]: pgmap v362: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:44.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:45 compute-2 ceph-mon[5983]: pgmap v363: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:45.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:46 compute-2 kernel: SELinux:  Converting 473 SID table entries...
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability network_peer_controls=1
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability open_perms=1
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability extended_socket_class=1
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability always_check_network=0
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct 09 09:45:46 compute-2 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct 09 09:45:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:46.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:46 compute-2 groupadd[95756]: group added to /etc/group: name=dnsmasq, GID=992
Oct 09 09:45:46 compute-2 groupadd[95756]: group added to /etc/gshadow: name=dnsmasq
Oct 09 09:45:46 compute-2 groupadd[95756]: new group: name=dnsmasq, GID=992
Oct 09 09:45:46 compute-2 useradd[95763]: new user: name=dnsmasq, UID=992, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Oct 09 09:45:46 compute-2 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Oct 09 09:45:46 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=5 res=1
Oct 09 09:45:46 compute-2 dbus-broker-launch[791]: Noticed file-system modification, trigger reload.
Oct 09 09:45:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:47 compute-2 groupadd[95776]: group added to /etc/group: name=clevis, GID=991
Oct 09 09:45:47 compute-2 groupadd[95776]: group added to /etc/gshadow: name=clevis
Oct 09 09:45:47 compute-2 groupadd[95776]: new group: name=clevis, GID=991
Oct 09 09:45:47 compute-2 useradd[95783]: new user: name=clevis, UID=991, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Oct 09 09:45:47 compute-2 usermod[95793]: add 'clevis' to group 'tss'
Oct 09 09:45:47 compute-2 usermod[95793]: add 'clevis' to shadow group 'tss'
Oct 09 09:45:47 compute-2 ceph-mon[5983]: pgmap v364: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:47.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:48.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:48 compute-2 polkitd[1121]: Reloading rules
Oct 09 09:45:48 compute-2 polkitd[1121]: Collecting garbage unconditionally...
Oct 09 09:45:48 compute-2 polkitd[1121]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:45:48 compute-2 polkitd[1121]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:45:48 compute-2 polkitd[1121]: Finished loading, compiling and executing 4 rules
Oct 09 09:45:48 compute-2 polkitd[1121]: Reloading rules
Oct 09 09:45:48 compute-2 polkitd[1121]: Collecting garbage unconditionally...
Oct 09 09:45:48 compute-2 polkitd[1121]: Loading rules from directory /etc/polkit-1/rules.d
Oct 09 09:45:48 compute-2 polkitd[1121]: Loading rules from directory /usr/share/polkit-1/rules.d
Oct 09 09:45:48 compute-2 polkitd[1121]: Finished loading, compiling and executing 4 rules
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:49 compute-2 ceph-mon[5983]: pgmap v365: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:49 compute-2 sudo[95981]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:45:49 compute-2 sudo[95981]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:45:49 compute-2 sudo[95981]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:49 compute-2 groupadd[96008]: group added to /etc/group: name=ceph, GID=167
Oct 09 09:45:49 compute-2 groupadd[96008]: group added to /etc/gshadow: name=ceph
Oct 09 09:45:49 compute-2 groupadd[96008]: new group: name=ceph, GID=167
Oct 09 09:45:49 compute-2 useradd[96014]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Oct 09 09:45:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:49.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:45:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:51 compute-2 podman[96024]: 2025-10-09 09:45:51.115624965 +0000 UTC m=+0.058748716 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Oct 09 09:45:51 compute-2 ceph-mon[5983]: pgmap v366: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:51.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:51 compute-2 systemd[1]: Stopping OpenSSH server daemon...
Oct 09 09:45:51 compute-2 sshd[1246]: Received signal 15; terminating.
Oct 09 09:45:51 compute-2 systemd[1]: sshd.service: Deactivated successfully.
Oct 09 09:45:51 compute-2 systemd[1]: Stopped OpenSSH server daemon.
Oct 09 09:45:51 compute-2 systemd[1]: sshd.service: Consumed 894ms CPU time, read 2.7M from disk, written 0B to disk.
Oct 09 09:45:51 compute-2 systemd[1]: Stopped target sshd-keygen.target.
Oct 09 09:45:51 compute-2 systemd[1]: Stopping sshd-keygen.target...
Oct 09 09:45:51 compute-2 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:45:51 compute-2 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:45:51 compute-2 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct 09 09:45:51 compute-2 systemd[1]: Reached target sshd-keygen.target.
Oct 09 09:45:51 compute-2 systemd[1]: Starting OpenSSH server daemon...
Oct 09 09:45:51 compute-2 sshd[96688]: Server listening on 0.0.0.0 port 22.
Oct 09 09:45:51 compute-2 sshd[96688]: Server listening on :: port 22.
Oct 09 09:45:51 compute-2 systemd[1]: Started OpenSSH server daemon.
Oct 09 09:45:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:52.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:52 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 09:45:52 compute-2 systemd[1]: Starting man-db-cache-update.service...
Oct 09 09:45:52 compute-2 systemd[1]: Reloading.
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:53 compute-2 systemd-rc-local-generator[96945]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:45:53 compute-2 systemd-sysv-generator[96948]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:45:53 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 09:45:53 compute-2 ceph-mon[5983]: pgmap v367: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:53.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:53 compute-2 systemd[1]: Starting PackageKit Daemon...
Oct 09 09:45:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:53 compute-2 PackageKit[97844]: daemon start
Oct 09 09:45:53 compute-2 systemd[1]: Started PackageKit Daemon.
Oct 09 09:45:54 compute-2 sudo[78387]: pam_unix(sudo:session): session closed for user root
Oct 09 09:45:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:54.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:55 compute-2 ceph-mon[5983]: pgmap v368: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:55.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:45:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:45:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:57 compute-2 ceph-mon[5983]: pgmap v369: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:45:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:57.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:45:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:45:58 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 09:45:58 compute-2 systemd[1]: Finished man-db-cache-update.service.
Oct 09 09:45:58 compute-2 systemd[1]: man-db-cache-update.service: Consumed 6.879s CPU time.
Oct 09 09:45:58 compute-2 systemd[1]: run-r499906ddeaca40afaafa798fed3f30ad.service: Deactivated successfully.
Oct 09 09:45:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:45:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:45:58.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:45:59 compute-2 ceph-mon[5983]: pgmap v370: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:45:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:45:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:45:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:45:59.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:45:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:45:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:45:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:45:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:00.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:01 compute-2 ceph-mon[5983]: pgmap v371: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:01.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:02.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:03 compute-2 ceph-mon[5983]: pgmap v372: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:03.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:04.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:05 compute-2 ceph-mon[5983]: pgmap v373: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:05.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:06.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:46:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s
                                           Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 09:46:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:07 compute-2 ceph-mon[5983]: pgmap v374: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:07.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:08.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:09 compute-2 sudo[105235]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:09 compute-2 sudo[105235]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:09 compute-2 sudo[105235]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:09 compute-2 ceph-mon[5983]: pgmap v375: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:46:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:09.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:46:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.265 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:46:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:46:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:46:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:10.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:11 compute-2 podman[105261]: 2025-10-09 09:46:11.207762779 +0000 UTC m=+0.041259387 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 09:46:11 compute-2 ceph-mon[5983]: pgmap v376: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:11.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:12.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:13 compute-2 ceph-mon[5983]: pgmap v377: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:13.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:14.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:15 compute-2 ceph-mon[5983]: pgmap v378: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct 09 09:46:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:15.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:16.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:17 compute-2 ceph-mon[5983]: pgmap v379: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:46:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:17.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:46:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:18 compute-2 sudo[105285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:46:18 compute-2 sudo[105285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:18 compute-2 sudo[105285]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:18 compute-2 sudo[105310]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:46:18 compute-2 sudo[105310]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:18.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:18 compute-2 sudo[105310]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:19 compute-2 ceph-mon[5983]: pgmap v380: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:46:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:19.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:20 compute-2 sudo[105491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oodrlrypgwyvdpwmajaasbowrqmmskph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003180.0777712-971-123712616062393/AnsiballZ_systemd.py'
Oct 09 09:46:20 compute-2 sudo[105491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:20.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:20 compute-2 python3.9[105493]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:20 compute-2 systemd[1]: Reloading.
Oct 09 09:46:20 compute-2 systemd-sysv-generator[105522]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:20 compute-2 systemd-rc-local-generator[105518]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:21 compute-2 sudo[105491]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:21 compute-2 podman[105555]: 2025-10-09 09:46:21.224379557 +0000 UTC m=+0.059395956 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:46:21 compute-2 sudo[105703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imojuclpwlyagbbuzshixgserfpmurrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003181.1749942-971-218097197621976/AnsiballZ_systemd.py'
Oct 09 09:46:21 compute-2 sudo[105703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:21 compute-2 python3.9[105705]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:21 compute-2 systemd[1]: Reloading.
Oct 09 09:46:21 compute-2 ceph-mon[5983]: pgmap v381: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:21 compute-2 systemd-rc-local-generator[105728]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:21 compute-2 systemd-sysv-generator[105731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.004000040s ======
Oct 09 09:46:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:21.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.004000040s
Oct 09 09:46:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:21 compute-2 sudo[105703]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:22 compute-2 sudo[105895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lydqhufwgoiqasbxjhnlsnfusbavaitu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003181.988553-971-45356671483193/AnsiballZ_systemd.py'
Oct 09 09:46:22 compute-2 sudo[105895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:22 compute-2 python3.9[105897]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:22 compute-2 systemd[1]: Reloading.
Oct 09 09:46:22 compute-2 systemd-sysv-generator[105923]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:22 compute-2 systemd-rc-local-generator[105920]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:22.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:22 compute-2 sudo[105895]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:22 compute-2 sudo[106086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmjefhuahtatqqqpxlnxjhxuukbtiiyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003182.7911253-971-57580127955774/AnsiballZ_systemd.py'
Oct 09 09:46:22 compute-2 sudo[106086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:23 compute-2 sudo[106089]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:46:23 compute-2 sudo[106089]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:23 compute-2 sudo[106089]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:23 compute-2 python3.9[106088]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:23 compute-2 systemd[1]: Reloading.
Oct 09 09:46:23 compute-2 systemd-rc-local-generator[106135]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:23 compute-2 systemd-sysv-generator[106139]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:23 compute-2 sudo[106086]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:23.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:23 compute-2 ceph-mon[5983]: pgmap v382: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:23 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:46:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:24 compute-2 sudo[106302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccaveqvkqmwhrgwmrinmwetepfgfwggc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003184.3652184-1058-182692579112807/AnsiballZ_systemd.py'
Oct 09 09:46:24 compute-2 sudo[106302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:24.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:24 compute-2 python3.9[106304]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:24 compute-2 systemd[1]: Reloading.
Oct 09 09:46:24 compute-2 systemd-sysv-generator[106331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:24 compute-2 systemd-rc-local-generator[106328]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:25 compute-2 sudo[106302]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:25 compute-2 sudo[106492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mquojwuafddleizdwniwgyyjrpgnrbph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003185.1976502-1058-187712041215610/AnsiballZ_systemd.py'
Oct 09 09:46:25 compute-2 sudo[106492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:25 compute-2 python3.9[106494]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:25 compute-2 systemd[1]: Reloading.
Oct 09 09:46:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:25.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:25 compute-2 systemd-sysv-generator[106521]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:25 compute-2 systemd-rc-local-generator[106518]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:25 compute-2 ceph-mon[5983]: pgmap v383: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:25 compute-2 sudo[106492]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:26 compute-2 sudo[106684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-carxtuoegwclefgajalpnagutiorkoyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003186.0320785-1058-74111016600978/AnsiballZ_systemd.py'
Oct 09 09:46:26 compute-2 sudo[106684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:26 compute-2 python3.9[106686]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:26 compute-2 systemd[1]: Reloading.
Oct 09 09:46:26 compute-2 systemd-rc-local-generator[106710]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:26 compute-2 systemd-sysv-generator[106713]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:26 compute-2 sudo[106684]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:27 compute-2 sudo[106873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsgqweqflhshnhctheordmlpgoscxotr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003187.049733-1058-273065719009865/AnsiballZ_systemd.py'
Oct 09 09:46:27 compute-2 sudo[106873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:27 compute-2 python3.9[106875]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:27 compute-2 sudo[106873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:46:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:27.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:46:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:27 compute-2 sudo[107029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-circqdcqhdvmpkjrdimrqlgenpihnars ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003187.6358426-1058-97812358717297/AnsiballZ_systemd.py'
Oct 09 09:46:27 compute-2 sudo[107029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:27 compute-2 ceph-mon[5983]: pgmap v384: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:28 compute-2 python3.9[107031]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:28 compute-2 systemd[1]: Reloading.
Oct 09 09:46:28 compute-2 systemd-rc-local-generator[107060]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:28 compute-2 systemd-sysv-generator[107064]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:28 compute-2 sudo[107029]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:29 compute-2 sudo[107220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdwanacbubfdxwfjbqlkcgcapxqhbamw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003188.9705515-1166-148728499328331/AnsiballZ_systemd.py'
Oct 09 09:46:29 compute-2 sudo[107220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:29 compute-2 python3.9[107222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct 09 09:46:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:29 compute-2 systemd[1]: Reloading.
Oct 09 09:46:29 compute-2 systemd-rc-local-generator[107250]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:46:29 compute-2 systemd-sysv-generator[107257]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:46:29 compute-2 sudo[107262]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:29 compute-2 sudo[107262]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:29 compute-2 sudo[107262]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:29 compute-2 systemd[1]: Listening on libvirt proxy daemon socket.
Oct 09 09:46:29 compute-2 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct 09 09:46:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:29.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:29 compute-2 sudo[107220]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:29 compute-2 ceph-mon[5983]: pgmap v385: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:30 compute-2 sudo[107440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbnuxinniwgxmmlcvxxqttfffmohqima ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003190.040821-1190-99985616348854/AnsiballZ_systemd.py'
Oct 09 09:46:30 compute-2 sudo[107440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:30 compute-2 python3.9[107442]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:30 compute-2 sudo[107440]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:30 compute-2 sudo[107595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-citsysjgbejyowqyundyrybsebnyrjhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003190.637092-1190-31324977577212/AnsiballZ_systemd.py'
Oct 09 09:46:30 compute-2 sudo[107595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:31 compute-2 python3.9[107597]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:31 compute-2 sudo[107595]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:31 compute-2 sudo[107750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pchkvfkkoglcrjmopusskiezlroppsex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003191.2105656-1190-109988652852439/AnsiballZ_systemd.py'
Oct 09 09:46:31 compute-2 sudo[107750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:31 compute-2 python3.9[107752]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:31 compute-2 sudo[107750]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:31.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:31 compute-2 ceph-mon[5983]: pgmap v386: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 99 KiB/s rd, 0 B/s wr, 164 op/s
Oct 09 09:46:31 compute-2 sudo[107906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lensigfbsevuejxdhgviwjybbiqtrbxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003191.7884243-1190-160613562949096/AnsiballZ_systemd.py'
Oct 09 09:46:31 compute-2 sudo[107906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:32 compute-2 python3.9[107908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:32 compute-2 sudo[107906]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:32 compute-2 sudo[108065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-disrsfkamgafugftyrbqudtoqfqabupc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003192.3797526-1190-139816677038821/AnsiballZ_systemd.py'
Oct 09 09:46:32 compute-2 sudo[108065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:32.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:32 compute-2 python3.9[108067]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:32 compute-2 sudo[108065]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:33 compute-2 sudo[108220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhhgteigmjwsdqpslnekvhwboiurvohv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003192.9694705-1190-194748829270473/AnsiballZ_systemd.py'
Oct 09 09:46:33 compute-2 sudo[108220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:33 compute-2 python3.9[108222]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:33 compute-2 sudo[108220]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:33.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:33 compute-2 sudo[108376]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhuwqwbcprmzqtiywkloqjecyaxlfpcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003193.5553956-1190-280484771340909/AnsiballZ_systemd.py'
Oct 09 09:46:33 compute-2 sudo[108376]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:33 compute-2 ceph-mon[5983]: pgmap v387: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 165 op/s
Oct 09 09:46:33 compute-2 python3.9[108378]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:34 compute-2 sudo[108376]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:34 compute-2 sudo[108532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttbpczdlzvfiuymzdwavshncvlmpyqaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003194.1475804-1190-247485946675108/AnsiballZ_systemd.py'
Oct 09 09:46:34 compute-2 sudo[108532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:34.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:34 compute-2 python3.9[108534]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:34 compute-2 sudo[108532]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:34 compute-2 sudo[108687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrzpedugpvfjajskacololmxoyirnpgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003194.7447205-1190-70129707224619/AnsiballZ_systemd.py'
Oct 09 09:46:34 compute-2 sudo[108687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:35 compute-2 python3.9[108689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:35 compute-2 sudo[108687]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:35 compute-2 sudo[108843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tihwdxhljlxmfkyrxjmvkgildefxunsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003195.3223283-1190-128626052639503/AnsiballZ_systemd.py'
Oct 09 09:46:35 compute-2 sudo[108843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:35.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:35 compute-2 python3.9[108845]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:35 compute-2 sudo[108843]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:35 compute-2 ceph-mon[5983]: pgmap v388: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:36 compute-2 sudo[108998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnpupmimhhpfzubbykggdedcqqtkfzsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003195.952924-1190-107768814158748/AnsiballZ_systemd.py'
Oct 09 09:46:36 compute-2 sudo[108998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:36 compute-2 python3.9[109001]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:36 compute-2 sudo[108998]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:36.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:36 compute-2 sudo[109154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckxxyplwptujcezhfwvqogiryhnrsogm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003196.5685115-1190-247949655140188/AnsiballZ_systemd.py'
Oct 09 09:46:36 compute-2 sudo[109154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:37 compute-2 python3.9[109156]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:37 compute-2 sudo[109154]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:37 compute-2 sudo[109309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxjwcbqplpwmlpyrupaeumgscnsvdulr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003197.1830297-1190-197005506964844/AnsiballZ_systemd.py'
Oct 09 09:46:37 compute-2 sudo[109309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:37 compute-2 python3.9[109311]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:37 compute-2 sudo[109309]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:37.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:37 compute-2 ceph-mon[5983]: pgmap v389: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:37 compute-2 sudo[109465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eatfiwavtfxhnavovrarlongvrjmueaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003197.7840025-1190-206864710032033/AnsiballZ_systemd.py'
Oct 09 09:46:37 compute-2 sudo[109465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:38 compute-2 python3.9[109467]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct 09 09:46:38 compute-2 sudo[109465]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:38.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:39.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:39 compute-2 ceph-mon[5983]: pgmap v390: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:40 compute-2 sudo[109623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srrhwzxsnwmhzqyrdbekfggtdiznnauk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003200.287396-1495-29165980175687/AnsiballZ_file.py'
Oct 09 09:46:40 compute-2 sudo[109623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:40.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:40 compute-2 python3.9[109625]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:40 compute-2 sudo[109623]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:40 compute-2 sudo[109775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsbylmqahptzurbasuaokgfknubjiiqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003200.765002-1495-58679189625530/AnsiballZ_file.py'
Oct 09 09:46:40 compute-2 sudo[109775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-2 python3.9[109777]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:41 compute-2 sudo[109775]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:41 compute-2 sudo[109935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znefnzsgrkdxdlogvpymippzlffqallc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003201.2041345-1495-38137479455296/AnsiballZ_file.py'
Oct 09 09:46:41 compute-2 sudo[109935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-2 podman[109901]: 2025-10-09 09:46:41.419750586 +0000 UTC m=+0.038450181 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:46:41 compute-2 python3.9[109945]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:41 compute-2 sudo[109935]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:41.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:41 compute-2 sudo[110096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ceseynrraneamlmqbxifviyxccqkleqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003201.6948695-1495-74973363824385/AnsiballZ_file.py'
Oct 09 09:46:41 compute-2 sudo[110096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:41 compute-2 ceph-mon[5983]: pgmap v391: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:42 compute-2 python3.9[110098]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-2 sudo[110096]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:42 compute-2 sudo[110249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srqpvucatelampieswtobiwmacdqdojx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003202.147565-1495-29824096740274/AnsiballZ_file.py'
Oct 09 09:46:42 compute-2 sudo[110249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:42 compute-2 python3.9[110251]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-2 sudo[110249]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:42.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:42 compute-2 sudo[110401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpjamedkobpabomecrlwseofibzsdmif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003202.6245544-1495-241202909268370/AnsiballZ_file.py'
Oct 09 09:46:42 compute-2 sudo[110401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:42 compute-2 python3.9[110403]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:46:42 compute-2 sudo[110401]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:43 compute-2 sudo[110554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbjxhyziuohciaptrvlbumhbvtrecku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003203.2785082-1624-222909275187416/AnsiballZ_stat.py'
Oct 09 09:46:43 compute-2 sudo[110554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:43.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:43 compute-2 python3.9[110556]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:43 compute-2 sudo[110554]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:43 compute-2 ceph-mon[5983]: pgmap v392: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:44 compute-2 sudo[110679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kydbpyzmfbqkytbiabzjuaiazxfdtqja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003203.2785082-1624-222909275187416/AnsiballZ_copy.py'
Oct 09 09:46:44 compute-2 sudo[110679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:44 compute-2 python3.9[110682]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003203.2785082-1624-222909275187416/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:44 compute-2 sudo[110679]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:44 compute-2 sudo[110832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxzcfhyexfnzeifcdlzlaprtkithgqtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003204.4076257-1624-65265225203423/AnsiballZ_stat.py'
Oct 09 09:46:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:44.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:44 compute-2 sudo[110832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:44 compute-2 python3.9[110834]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:44 compute-2 sudo[110832]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:44 compute-2 sudo[110957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwnsgzdyjfwmtzayrkerijanoyxiztp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003204.4076257-1624-65265225203423/AnsiballZ_copy.py'
Oct 09 09:46:44 compute-2 sudo[110957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-2 python3.9[110959]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003204.4076257-1624-65265225203423/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:45 compute-2 sudo[110957]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:45 compute-2 sudo[111109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ocqofbvkzkhoxiwxnawtqdlnwfdbtmfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003205.2692983-1624-255031006266652/AnsiballZ_stat.py'
Oct 09 09:46:45 compute-2 sudo[111109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-2 python3.9[111111]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:45 compute-2 sudo[111109]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:45.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:45 compute-2 sudo[111235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybhvidtftsujocffmokyrsdnanyksnes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003205.2692983-1624-255031006266652/AnsiballZ_copy.py'
Oct 09 09:46:45 compute-2 sudo[111235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:45 compute-2 ceph-mon[5983]: pgmap v393: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:46 compute-2 python3.9[111237]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003205.2692983-1624-255031006266652/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:46 compute-2 sudo[111235]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:46 compute-2 sudo[111388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxqeupdufddkbedtjdyfnfiqzzaadzib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.1329062-1624-28827873194475/AnsiballZ_stat.py'
Oct 09 09:46:46 compute-2 sudo[111388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:46 compute-2 python3.9[111390]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:46 compute-2 sudo[111388]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:46.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:46 compute-2 sudo[111513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkrkrdyxcfftfclpzvdtnyepemtkrrsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003206.1329062-1624-28827873194475/AnsiballZ_copy.py'
Oct 09 09:46:46 compute-2 sudo[111513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:46 compute-2 python3.9[111515]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003206.1329062-1624-28827873194475/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:46 compute-2 sudo[111513]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-2 sudo[111665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anepppbaijjujzdxbnfwytzieehenhuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.016855-1624-110603227701467/AnsiballZ_stat.py'
Oct 09 09:46:47 compute-2 sudo[111665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:47 compute-2 python3.9[111667]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:47 compute-2 sudo[111665]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-2 sudo[111791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qsyjesodxeoeefardqklklhwdonnvpqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.016855-1624-110603227701467/AnsiballZ_copy.py'
Oct 09 09:46:47 compute-2 sudo[111791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:47.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:47 compute-2 python3.9[111793]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.016855-1624-110603227701467/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:47 compute-2 sudo[111791]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:47 compute-2 ceph-mon[5983]: pgmap v394: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:48 compute-2 sudo[111943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqtbgleaydvutbupujovjoedwetfobtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.8874934-1624-98877234257813/AnsiballZ_stat.py'
Oct 09 09:46:48 compute-2 sudo[111943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:48 compute-2 python3.9[111945]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:48 compute-2 sudo[111943]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:48 compute-2 sudo[112069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkhsylohijishgknqokzzlrrrrzjsnoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003207.8874934-1624-98877234257813/AnsiballZ_copy.py'
Oct 09 09:46:48 compute-2 sudo[112069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:48.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:48 compute-2 python3.9[112071]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003207.8874934-1624-98877234257813/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:48 compute-2 sudo[112069]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:48 compute-2 sudo[112221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynbbpqewstamrjrlhounivyitmsejfan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003208.7556229-1624-85334547532276/AnsiballZ_stat.py'
Oct 09 09:46:48 compute-2 sudo[112221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-2 python3.9[112223]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:49 compute-2 sudo[112221]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-2 sudo[112344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fleiupannrnuahmhfxkvnezhomfgldzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003208.7556229-1624-85334547532276/AnsiballZ_copy.py'
Oct 09 09:46:49 compute-2 sudo[112344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:49 compute-2 python3.9[112346]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003208.7556229-1624-85334547532276/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:49 compute-2 sudo[112344]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:49.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:49 compute-2 sudo[112497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyrbsiduolispfqqadqrzfaxoqzjkslq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003209.5846796-1624-275514908684397/AnsiballZ_stat.py'
Oct 09 09:46:49 compute-2 sudo[112497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:49 compute-2 sudo[112500]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:46:49 compute-2 sudo[112500]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:46:49 compute-2 sudo[112500]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:49 compute-2 python3.9[112499]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:49 compute-2 sudo[112497]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:50 compute-2 ceph-mon[5983]: pgmap v395: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:46:50 compute-2 sudo[112648]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hanexqoxosjwepxsubbhvqgjekwyazol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003209.5846796-1624-275514908684397/AnsiballZ_copy.py'
Oct 09 09:46:50 compute-2 sudo[112648]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:50 compute-2 python3.9[112650]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1760003209.5846796-1624-275514908684397/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:50 compute-2 sudo[112648]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:50.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:50 compute-2 sudo[112800]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-chyggxmxuyfrirhaudogoqgghpsoxvpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003210.7000692-1964-36939536352184/AnsiballZ_command.py'
Oct 09 09:46:50 compute-2 sudo[112800]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:51 compute-2 python3.9[112802]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct 09 09:46:51 compute-2 sudo[112800]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:51 compute-2 sudo[112962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capmnrqsjhbnycfwjkxlwlugfimrylur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003211.2466178-1990-211744599456119/AnsiballZ_file.py'
Oct 09 09:46:51 compute-2 sudo[112962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:51 compute-2 podman[112927]: 2025-10-09 09:46:51.460325095 +0000 UTC m=+0.055099897 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct 09 09:46:51 compute-2 python3.9[112972]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:51 compute-2 sudo[112962]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:51.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:51 compute-2 sudo[113130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gndnslmpchmrzpihmpeqbumzowbyzuom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003211.7008429-1990-163580224987456/AnsiballZ_file.py'
Oct 09 09:46:51 compute-2 sudo[113130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:52 compute-2 ceph-mon[5983]: pgmap v396: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:46:52 compute-2 python3.9[113132]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-2 sudo[113130]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:52 compute-2 sudo[113283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgcrfveydyohvquaatxazdungkvmmchi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.1299615-1990-223589506045015/AnsiballZ_file.py'
Oct 09 09:46:52 compute-2 sudo[113283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:52 compute-2 python3.9[113285]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-2 sudo[113283]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:52.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:52 compute-2 sudo[113435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdzrxjhavmlzmfuzumowpbzuapkhfoss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.5515208-1990-237391699851928/AnsiballZ_file.py'
Oct 09 09:46:52 compute-2 sudo[113435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:52 compute-2 python3.9[113437]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:52 compute-2 sudo[113435]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:53 compute-2 sudo[113587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvxjahbnskyfgbjtklovougqeltedhwo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003212.9772673-1990-202575947537498/AnsiballZ_file.py'
Oct 09 09:46:53 compute-2 sudo[113587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:53 compute-2 python3.9[113589]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:53 compute-2 sudo[113587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-2 sudo[113740]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btbldivljylnosgxicjlfyyegtrpohmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003213.4175718-1990-235835491469242/AnsiballZ_file.py'
Oct 09 09:46:53 compute-2 sudo[113740]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:53.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:53 compute-2 python3.9[113742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:53 compute-2 sudo[113740]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:54 compute-2 ceph-mon[5983]: pgmap v397: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:46:54 compute-2 sudo[113892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltuissehnxhvqfsybfkepycmxmbcirqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003213.871218-1990-269045673458281/AnsiballZ_file.py'
Oct 09 09:46:54 compute-2 sudo[113892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:54 compute-2 python3.9[113894]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:54 compute-2 sudo[113892]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:54 compute-2 sudo[114045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxdxtzrkowoyprptjaktccyykzszcraz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003214.3533995-1990-263088264693125/AnsiballZ_file.py'
Oct 09 09:46:54 compute-2 sudo[114045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:54.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:54 compute-2 python3.9[114047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:54 compute-2 sudo[114045]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:54 compute-2 sudo[114197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldijnbezpautgjhwkbaardisgldereup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003214.7934146-1990-29162448461249/AnsiballZ_file.py'
Oct 09 09:46:54 compute-2 sudo[114197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:55 compute-2 python3.9[114199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:55 compute-2 sudo[114197]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:55 compute-2 sudo[114349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxsjgbsrfnndjajwdtsdknoketklglnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003215.241921-1990-184626326204650/AnsiballZ_file.py'
Oct 09 09:46:55 compute-2 sudo[114349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:55 compute-2 python3.9[114351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:55 compute-2 sudo[114349]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:55.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:55 compute-2 systemd[1]: Starting Cleanup of Temporary Directories...
Oct 09 09:46:55 compute-2 sudo[114504]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plwbloxgflxsypiakspmkqbjqsjklljn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003215.713056-1990-101483168890992/AnsiballZ_file.py'
Oct 09 09:46:55 compute-2 sudo[114504]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:55 compute-2 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct 09 09:46:55 compute-2 systemd[1]: Finished Cleanup of Temporary Directories.
Oct 09 09:46:55 compute-2 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct 09 09:46:56 compute-2 ceph-mon[5983]: pgmap v398: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:46:56 compute-2 python3.9[114506]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:56 compute-2 sudo[114504]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:56 compute-2 sudo[114657]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozypdlhzdfywxdnyggcyeqhvkzkovhip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003216.176562-1990-253019360401966/AnsiballZ_file.py'
Oct 09 09:46:56 compute-2 sudo[114657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:56 compute-2 python3.9[114659]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:56 compute-2 sudo[114657]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:56.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:56 compute-2 sudo[114809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdtlmalyndepcrsqshkkdnegnevluewt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003216.6444447-1990-35347431536906/AnsiballZ_file.py'
Oct 09 09:46:56 compute-2 sudo[114809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:56 compute-2 python3.9[114811]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:57 compute-2 sudo[114809]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:57 compute-2 sudo[114961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfsfwmfsglxqensxipjrfogiupvxzgba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003217.094588-1990-164483851814698/AnsiballZ_file.py'
Oct 09 09:46:57 compute-2 sudo[114961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:57 compute-2 python3.9[114963]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:57 compute-2 sudo[114961]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:46:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:57.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:46:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:57 compute-2 sudo[115114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwbueecmzrlpzbfwcyfdyfvmwisxvcmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003217.6217384-2287-187263854618682/AnsiballZ_stat.py'
Oct 09 09:46:57 compute-2 sudo[115114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:57 compute-2 python3.9[115116]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:57 compute-2 sudo[115114]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:46:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:46:58 compute-2 ceph-mon[5983]: pgmap v399: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:46:58 compute-2 sudo[115238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymxfaadqjnflbxetdopwxoqpllrtqilc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003217.6217384-2287-187263854618682/AnsiballZ_copy.py'
Oct 09 09:46:58 compute-2 sudo[115238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:58 compute-2 python3.9[115240]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003217.6217384-2287-187263854618682/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:58 compute-2 sudo[115238]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:46:58.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:58 compute-2 sudo[115390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpdnwsqrbhplhtrlzzpclejfebdwtbic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003218.4530416-2287-98275886707179/AnsiballZ_stat.py'
Oct 09 09:46:58 compute-2 sudo[115390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:58 compute-2 python3.9[115392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:58 compute-2 sudo[115390]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:59 compute-2 sudo[115513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nixqrjdyvzhlqedyzdtraiomgrqjhvfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003218.4530416-2287-98275886707179/AnsiballZ_copy.py'
Oct 09 09:46:59 compute-2 sudo[115513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:59 compute-2 python3.9[115515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003218.4530416-2287-98275886707179/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:46:59 compute-2 sudo[115513]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:46:59 compute-2 sudo[115665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iawtqhttlqfadxhuqiovevvwjuirrpvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003219.295015-2287-71118806312740/AnsiballZ_stat.py'
Oct 09 09:46:59 compute-2 sudo[115665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:46:59 compute-2 python3.9[115668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:46:59 compute-2 sudo[115665]: pam_unix(sudo:session): session closed for user root
Oct 09 09:46:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:46:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:46:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:46:59.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:46:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:46:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:46:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:46:59 compute-2 sudo[115789]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsvfwapcoktgvmsitkjvmmipduvkuquy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003219.295015-2287-71118806312740/AnsiballZ_copy.py'
Oct 09 09:46:59 compute-2 sudo[115789]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-2 ceph-mon[5983]: pgmap v400: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:00 compute-2 python3.9[115791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003219.295015-2287-71118806312740/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:00 compute-2 sudo[115789]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:00 compute-2 sudo[115942]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtdkoxvoesgogzjudfgqewxkjpokntcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.2104194-2287-20951953278809/AnsiballZ_stat.py'
Oct 09 09:47:00 compute-2 sudo[115942]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-2 python3.9[115944]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:00 compute-2 sudo[115942]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:00.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:00 compute-2 sudo[116065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmvwilpmqluqynxdgsbqavkkfzzybcgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003220.2104194-2287-20951953278809/AnsiballZ_copy.py'
Oct 09 09:47:00 compute-2 sudo[116065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:00 compute-2 python3.9[116067]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003220.2104194-2287-20951953278809/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:00 compute-2 sudo[116065]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:01 compute-2 sudo[116217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bkbxmtmdttzazhcdbflomghdeyehwveg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.0768297-2287-178335115248352/AnsiballZ_stat.py'
Oct 09 09:47:01 compute-2 sudo[116217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:01 compute-2 python3.9[116219]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:01 compute-2 sudo[116217]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:01 compute-2 sudo[116341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hyxcfhkofqapgdpurnwrhtuldclqescw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.0768297-2287-178335115248352/AnsiballZ_copy.py'
Oct 09 09:47:01 compute-2 sudo[116341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:01.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:01 compute-2 python3.9[116343]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.0768297-2287-178335115248352/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:01 compute-2 sudo[116341]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-2 ceph-mon[5983]: pgmap v401: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:02 compute-2 sudo[116493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wokmxwejuqumdjrxjtlyjfpjzojpvoxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.908661-2287-264805387401850/AnsiballZ_stat.py'
Oct 09 09:47:02 compute-2 sudo[116493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:02 compute-2 python3.9[116495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:02 compute-2 sudo[116493]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-2 sudo[116617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqdgnjmtqzpyuccgvfjjebaumfjlpdeq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003221.908661-2287-264805387401850/AnsiballZ_copy.py'
Oct 09 09:47:02 compute-2 sudo[116617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:02.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:02 compute-2 python3.9[116619]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003221.908661-2287-264805387401850/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:02 compute-2 sudo[116617]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:02 compute-2 sudo[116769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlmidkimcophtvblbdayozjcatjwktk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003222.7714226-2287-124579632906047/AnsiballZ_stat.py'
Oct 09 09:47:02 compute-2 sudo[116769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:03 compute-2 python3.9[116771]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:03 compute-2 sudo[116769]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:03 compute-2 sudo[116892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzhhwlyyfvycgdcnhzzacmgynwhjjrwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003222.7714226-2287-124579632906047/AnsiballZ_copy.py'
Oct 09 09:47:03 compute-2 sudo[116892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:03 compute-2 python3.9[116894]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003222.7714226-2287-124579632906047/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:03 compute-2 sudo[116892]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:03.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:03 compute-2 sudo[117045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmqjjahhjrgvcpdchajkzwnbdjdjxuhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003223.6110396-2287-108395645424791/AnsiballZ_stat.py'
Oct 09 09:47:03 compute-2 sudo[117045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:03 compute-2 python3.9[117047]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:03 compute-2 sudo[117045]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:04 compute-2 ceph-mon[5983]: pgmap v402: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:04 compute-2 sudo[117169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcutlfcpbtvnvnjduqdvxwllcraqvrty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003223.6110396-2287-108395645424791/AnsiballZ_copy.py'
Oct 09 09:47:04 compute-2 sudo[117169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:04 compute-2 python3.9[117171]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003223.6110396-2287-108395645424791/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:04 compute-2 sudo[117169]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:04.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:04 compute-2 sudo[117321]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbrzltuhkwvfhokbjnqsencsvdfibvsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003224.4478147-2287-198516489822151/AnsiballZ_stat.py'
Oct 09 09:47:04 compute-2 sudo[117321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:04 compute-2 python3.9[117323]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:04 compute-2 sudo[117321]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:05 compute-2 sudo[117444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwmqkypfwsngmkwsnfuozdcbddeeuvwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003224.4478147-2287-198516489822151/AnsiballZ_copy.py'
Oct 09 09:47:05 compute-2 sudo[117444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:05 compute-2 python3.9[117446]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003224.4478147-2287-198516489822151/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:05 compute-2 sudo[117444]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:05 compute-2 sudo[117597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oraztligyycillyabdinmctynopiisbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003225.3746257-2287-250846801359665/AnsiballZ_stat.py'
Oct 09 09:47:05 compute-2 sudo[117597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:05 compute-2 python3.9[117599]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:05 compute-2 sudo[117597]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:05.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:06 compute-2 sudo[117720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcgnqrctnysaueefsytmwcratrnxlpcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003225.3746257-2287-250846801359665/AnsiballZ_copy.py'
Oct 09 09:47:06 compute-2 sudo[117720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:06 compute-2 ceph-mon[5983]: pgmap v403: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:06 compute-2 python3.9[117722]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003225.3746257-2287-250846801359665/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:06 compute-2 sudo[117720]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:06 compute-2 sudo[117873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sempmgjdhmcotpooxirfnldhxteokchl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003226.2927415-2287-212053636094214/AnsiballZ_stat.py'
Oct 09 09:47:06 compute-2 sudo[117873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:06.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:06 compute-2 python3.9[117875]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:06 compute-2 sudo[117873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:06 compute-2 sudo[117996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmhmgvbppubhmphknsnhyxylzrpcylll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003226.2927415-2287-212053636094214/AnsiballZ_copy.py'
Oct 09 09:47:06 compute-2 sudo[117996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:07 compute-2 python3.9[117998]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003226.2927415-2287-212053636094214/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:07 compute-2 sudo[117996]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:07 compute-2 sudo[118148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvlgipisdqwrzessmrrzfirslhvehcvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003227.248331-2287-91375763134553/AnsiballZ_stat.py'
Oct 09 09:47:07 compute-2 sudo[118148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:07 compute-2 python3.9[118150]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:07 compute-2 sudo[118148]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:07.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:07 compute-2 sudo[118272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbnfrmzbaplrcjacusrrymdvosipnnpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003227.248331-2287-91375763134553/AnsiballZ_copy.py'
Oct 09 09:47:07 compute-2 sudo[118272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:08 compute-2 python3.9[118274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003227.248331-2287-91375763134553/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:08 compute-2 sudo[118272]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:08 compute-2 ceph-mon[5983]: pgmap v404: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:08 compute-2 sudo[118425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gancxjqrrcublavgxayktmdultzgapbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.1503737-2287-129227188622222/AnsiballZ_stat.py'
Oct 09 09:47:08 compute-2 sudo[118425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-2 python3.9[118427]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:08 compute-2 sudo[118425]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:08.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:08 compute-2 sudo[118548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmxpbajxzfmrwootfhszkiulqhpquedn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.1503737-2287-129227188622222/AnsiballZ_copy.py'
Oct 09 09:47:08 compute-2 sudo[118548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:08 compute-2 python3.9[118550]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.1503737-2287-129227188622222/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:08 compute-2 sudo[118548]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-2 sudo[118700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbxhuvogdhjrlakgacvfoffqynwfuzhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.9958992-2287-150289708376550/AnsiballZ_stat.py'
Oct 09 09:47:09 compute-2 sudo[118700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:09 compute-2 python3.9[118702]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:09 compute-2 sudo[118700]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:09 compute-2 sudo[118824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivnovlpqobmvuorggiasftzlffbjyjmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003228.9958992-2287-150289708376550/AnsiballZ_copy.py'
Oct 09 09:47:09 compute-2 sudo[118824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:09 compute-2 python3.9[118826]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003228.9958992-2287-150289708376550/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:09 compute-2 sudo[118824]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:09.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:09 compute-2 sudo[118851]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:09 compute-2 sudo[118851]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:09 compute-2 sudo[118851]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:10 compute-2 ceph-mon[5983]: pgmap v405: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.266 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:47:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:47:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:47:10 compute-2 python3.9[119002]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:47:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:10.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:47:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:10 compute-2 sudo[119155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nniljtdyrcvqqdyllcwopmlhofzctbxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003230.5517585-2906-114030649925283/AnsiballZ_seboolean.py'
Oct 09 09:47:10 compute-2 sudo[119155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:10 compute-2 python3.9[119157]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct 09 09:47:11 compute-2 sudo[119155]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:11.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:12 compute-2 ceph-mon[5983]: pgmap v406: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:12 compute-2 dbus-broker-launch[792]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct 09 09:47:12 compute-2 sudo[119323]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lklgxqaxvrylginnzqmmrqnaypdagmzg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003231.9797266-2930-63188254797408/AnsiballZ_copy.py'
Oct 09 09:47:12 compute-2 sudo[119323]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:12 compute-2 podman[119287]: 2025-10-09 09:47:12.178829765 +0000 UTC m=+0.037785278 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:47:12 compute-2 python3.9[119332]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:12 compute-2 sudo[119323]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:12 compute-2 sudo[119482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmmeeycaznpcwbtnkqjwkkdaueqxizqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003232.4503472-2930-106549919073371/AnsiballZ_copy.py'
Oct 09 09:47:12 compute-2 sudo[119482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:12.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:12 compute-2 python3.9[119484]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:12 compute-2 sudo[119482]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:13 compute-2 sudo[119634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wspssrkqebzxixufutecapfbfshewjwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003232.880328-2930-59751015371425/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-2 sudo[119634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:13 compute-2 python3.9[119636]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:13 compute-2 sudo[119634]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-2 sudo[119787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqhnnaabeqphuacskqvexskaowswezjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003233.3579478-2930-114155651522481/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-2 sudo[119787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:13 compute-2 python3.9[119789]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:13 compute-2 sudo[119787]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:13.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:13 compute-2 sudo[119939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kptevfxlbhndqwttvctlvxejsnzwisxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003233.7933896-2930-196191181735518/AnsiballZ_copy.py'
Oct 09 09:47:13 compute-2 sudo[119939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:14 compute-2 ceph-mon[5983]: pgmap v407: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:14 compute-2 python3.9[119941]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:14 compute-2 sudo[119939]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:14 compute-2 sudo[120092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poceslkgesygjkxgxuaauohwpchtinus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003234.2855258-3037-277755395666299/AnsiballZ_copy.py'
Oct 09 09:47:14 compute-2 sudo[120092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:14.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:14 compute-2 python3.9[120094]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:14 compute-2 sudo[120092]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:14 compute-2 sudo[120244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqgvezebjspymhlcxiszuzuhrvvhtjzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003234.7348058-3037-160515381334787/AnsiballZ_copy.py'
Oct 09 09:47:14 compute-2 sudo[120244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:15 compute-2 python3.9[120246]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:15 compute-2 sudo[120244]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:15 compute-2 sudo[120396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdbwgqpcnjelkiwakyxzxldbvbyfmcnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003235.2065198-3037-131180999408494/AnsiballZ_copy.py'
Oct 09 09:47:15 compute-2 sudo[120396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:15 compute-2 python3.9[120398]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:15 compute-2 sudo[120396]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:15.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:15 compute-2 sudo[120549]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqatlxgweseowolqutcbwecwhaagnmod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003235.7324255-3037-126157154476129/AnsiballZ_copy.py'
Oct 09 09:47:15 compute-2 sudo[120549]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:16 compute-2 python3.9[120551]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:16 compute-2 sudo[120549]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:16 compute-2 ceph-mon[5983]: pgmap v408: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:16.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:16 compute-2 sudo[120702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-simjbzovlquldjtfwdjjahaegihqmuir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003236.502232-3037-242979713801028/AnsiballZ_copy.py'
Oct 09 09:47:16 compute-2 sudo[120702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:16 compute-2 python3.9[120704]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:16 compute-2 sudo[120702]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:17 compute-2 sudo[120854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arbjwhhrtcgsoulczkewqmdvokmetyyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003237.0527964-3145-281274884283169/AnsiballZ_systemd.py'
Oct 09 09:47:17 compute-2 sudo[120854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:17 compute-2 python3.9[120856]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:17 compute-2 systemd[1]: Reloading.
Oct 09 09:47:17 compute-2 systemd-sysv-generator[120881]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:17 compute-2 systemd-rc-local-generator[120878]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon socket...
Oct 09 09:47:17 compute-2 systemd[1]: Listening on libvirt logging daemon socket.
Oct 09 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon admin socket...
Oct 09 09:47:17 compute-2 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct 09 09:47:17 compute-2 systemd[1]: Starting libvirt logging daemon...
Oct 09 09:47:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:17.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:17 compute-2 systemd[1]: Started libvirt logging daemon.
Oct 09 09:47:17 compute-2 sudo[120854]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:18 compute-2 ceph-mon[5983]: pgmap v409: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:18 compute-2 sudo[121049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmqezxitpxpnejrlbfslugbdevwclnyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003237.9684098-3145-259396964444417/AnsiballZ_systemd.py'
Oct 09 09:47:18 compute-2 sudo[121049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:18 compute-2 python3.9[121051]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:18 compute-2 systemd[1]: Reloading.
Oct 09 09:47:18 compute-2 systemd-rc-local-generator[121072]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:18 compute-2 systemd-sysv-generator[121078]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:18.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon socket...
Oct 09 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct 09 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct 09 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct 09 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct 09 09:47:18 compute-2 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct 09 09:47:18 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 09:47:18 compute-2 systemd[1]: Started libvirt nodedev daemon.
Oct 09 09:47:18 compute-2 sudo[121049]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:19 compute-2 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct 09 09:47:19 compute-2 sudo[121265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muwkdoieqbsnzqrtjxhxvhurpzivlvlz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003238.8560834-3145-25977361172084/AnsiballZ_systemd.py'
Oct 09 09:47:19 compute-2 sudo[121265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:19 compute-2 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct 09 09:47:19 compute-2 python3.9[121267]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:19 compute-2 systemd[1]: Reloading.
Oct 09 09:47:19 compute-2 systemd-sysv-generator[121294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:19 compute-2 systemd-rc-local-generator[121287]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct 09 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct 09 09:47:19 compute-2 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct 09 09:47:19 compute-2 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct 09 09:47:19 compute-2 systemd[1]: Starting libvirt proxy daemon...
Oct 09 09:47:19 compute-2 systemd[1]: Started libvirt proxy daemon.
Oct 09 09:47:19 compute-2 sudo[121265]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:19 compute-2 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct 09 09:47:19 compute-2 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct 09 09:47:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:19.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:19 compute-2 sudo[121484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtfjwlyljhjaduahqevfjvtxpozprufx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003239.7134733-3145-86905097378388/AnsiballZ_systemd.py'
Oct 09 09:47:19 compute-2 sudo[121484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:20 compute-2 ceph-mon[5983]: pgmap v410: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:20 compute-2 python3.9[121486]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:20 compute-2 systemd[1]: Reloading.
Oct 09 09:47:20 compute-2 systemd-rc-local-generator[121508]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:20 compute-2 systemd-sysv-generator[121515]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:20 compute-2 setroubleshoot[121241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 0eaec97f-aa6e-4607-a718-37c25c0f061f
Oct 09 09:47:20 compute-2 setroubleshoot[121241]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Oct 09 09:47:20 compute-2 systemd[1]: Listening on libvirt locking daemon socket.
Oct 09 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon socket...
Oct 09 09:47:20 compute-2 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct 09 09:47:20 compute-2 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct 09 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct 09 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct 09 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct 09 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct 09 09:47:20 compute-2 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct 09 09:47:20 compute-2 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct 09 09:47:20 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 09:47:20 compute-2 systemd[1]: Started libvirt QEMU daemon.
Oct 09 09:47:20 compute-2 sudo[121484]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:20.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:20 compute-2 sudo[121698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqovahcayckcoxdurevpxhofhmljyics ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003240.615646-3145-142594761832047/AnsiballZ_systemd.py'
Oct 09 09:47:20 compute-2 sudo[121698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:21 compute-2 python3.9[121700]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:47:21 compute-2 systemd[1]: Reloading.
Oct 09 09:47:21 compute-2 systemd-rc-local-generator[121723]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:21 compute-2 systemd-sysv-generator[121726]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon socket...
Oct 09 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon socket.
Oct 09 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon admin socket...
Oct 09 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct 09 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct 09 09:47:21 compute-2 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct 09 09:47:21 compute-2 systemd[1]: Starting libvirt secret daemon...
Oct 09 09:47:21 compute-2 systemd[1]: Started libvirt secret daemon.
Oct 09 09:47:21 compute-2 sudo[121698]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:21 compute-2 sudo[121920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlnganisakizwodtolltycfbecmiyjib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003241.6579692-3257-131166507849785/AnsiballZ_file.py'
Oct 09 09:47:21 compute-2 sudo[121920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:21 compute-2 podman[121883]: 2025-10-09 09:47:21.874537421 +0000 UTC m=+0.056858475 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 09 09:47:22 compute-2 python3.9[121928]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:22 compute-2 sudo[121920]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:22 compute-2 ceph-mon[5983]: pgmap v411: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:22 compute-2 auditd[732]: Audit daemon rotating log files
Oct 09 09:47:22 compute-2 sudo[122086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ripmxaoynoujhdbsoatqzollxkzrgdej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003242.1643004-3281-17430591125470/AnsiballZ_find.py'
Oct 09 09:47:22 compute-2 sudo[122086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:22 compute-2 python3.9[122088]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:47:22 compute-2 sudo[122086]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:22.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:22 compute-2 sudo[122238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcmmuictswdwivqsnyyraqooowiexqie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003242.6609228-3304-236643146202851/AnsiballZ_command.py'
Oct 09 09:47:22 compute-2 sudo[122238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:22 compute-2 python3.9[122240]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:23 compute-2 sudo[122238]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-2 sudo[122269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:47:23 compute-2 sudo[122269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:23 compute-2 sudo[122269]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-2 sudo[122294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:47:23 compute-2 sudo[122294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:23 compute-2 sudo[122294]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:23 compute-2 python3.9[122458]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:24 compute-2 ceph-mon[5983]: pgmap v412: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:47:24 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:47:24 compute-2 python3.9[122624]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:24.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:24 compute-2 python3.9[122745]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003243.971643-3361-54701178436486/.source.xml follow=False _original_basename=secret.xml.j2 checksum=c150843fcb80d0d0a9968a12abeb036b918e43ed backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:25 compute-2 sudo[122895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlnqjzpcvzogfautubaipitcyncjefa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003244.9503365-3406-94339106693681/AnsiballZ_command.py'
Oct 09 09:47:25 compute-2 sudo[122895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:25 compute-2 python3.9[122897]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 286f8bf0-da72-5823-9a4e-ac4457d9e609
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:25 compute-2 polkitd[1121]: Registered Authentication Agent for unix-process:122899:92979 (system bus name :1.1290 [/usr/bin/pkttyagent --process 122899 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:25 compute-2 polkitd[1121]: Unregistered Authentication Agent for unix-process:122899:92979 (system bus name :1.1290, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:25 compute-2 polkitd[1121]: Registered Authentication Agent for unix-process:122898:92978 (system bus name :1.1291 [/usr/bin/pkttyagent --process 122898 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:25 compute-2 polkitd[1121]: Unregistered Authentication Agent for unix-process:122898:92978 (system bus name :1.1291, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:25 compute-2 sudo[122895]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:25 compute-2 python3.9[123060]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:26 compute-2 ceph-mon[5983]: pgmap v413: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:26 compute-2 sudo[123211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhzfxexdkhycsmpcntfptdevyhaajdsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003246.103367-3454-220308763169643/AnsiballZ_command.py'
Oct 09 09:47:26 compute-2 sudo[123211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:26 compute-2 sudo[123211]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:26.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:26 compute-2 sudo[123364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icvpkyqibdrisawdsczaxazambiaqmjt ; FSID=286f8bf0-da72-5823-9a4e-ac4457d9e609 KEY=AQBWgedoAAAAABAA+vk8nE5nieplThBL84fakw== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003246.6685693-3478-180219522381126/AnsiballZ_command.py'
Oct 09 09:47:26 compute-2 sudo[123364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:26 compute-2 sudo[123367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:47:26 compute-2 sudo[123367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:26 compute-2 sudo[123367]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-2 polkitd[1121]: Registered Authentication Agent for unix-process:123392:93151 (system bus name :1.1295 [/usr/bin/pkttyagent --process 123392 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Oct 09 09:47:27 compute-2 polkitd[1121]: Unregistered Authentication Agent for unix-process:123392:93151 (system bus name :1.1295, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Oct 09 09:47:27 compute-2 sudo[123364]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-2 sudo[123547]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjfukvcvlpdtabyztlrcjlarovzihnwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.2478771-3502-235195189869365/AnsiballZ_copy.py'
Oct 09 09:47:27 compute-2 sudo[123547]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:27 compute-2 python3.9[123549]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:27 compute-2 sudo[123547]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:27 compute-2 ceph-mon[5983]: pgmap v414: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:47:27 compute-2 sudo[123700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eicbdaaaojktfakmfpucjwumomizytov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.7814064-3527-82530221301438/AnsiballZ_stat.py'
Oct 09 09:47:27 compute-2 sudo[123700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:28 compute-2 python3.9[123702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:28 compute-2 sudo[123700]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:28.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:28 compute-2 sudo[123824]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygmuupjqysqpxghfxxjalzdykjpxkqle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003247.7814064-3527-82530221301438/AnsiballZ_copy.py'
Oct 09 09:47:28 compute-2 sudo[123824]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:28 compute-2 python3.9[123826]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003247.7814064-3527-82530221301438/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:28 compute-2 sudo[123824]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:29 compute-2 sudo[123976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyoxtxytibujojgpkarurhtvtodmvhej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.13487-3575-108483371883338/AnsiballZ_file.py'
Oct 09 09:47:29 compute-2 sudo[123976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:29 compute-2 python3.9[123978]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:29 compute-2 sudo[123976]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:29.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:29 compute-2 sudo[124129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucjpupnsxcaiqmstaosmjfmfdwqwinkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.6408834-3599-68161930488861/AnsiballZ_stat.py'
Oct 09 09:47:29 compute-2 sudo[124129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:29 compute-2 ceph-mon[5983]: pgmap v415: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:29 compute-2 sudo[124132]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:29 compute-2 sudo[124132]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:29 compute-2 sudo[124132]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-2 python3.9[124131]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:30 compute-2 sudo[124129]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-2 sudo[124233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqxybruavyrdlyfjiebbffnuusttwyba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003249.6408834-3599-68161930488861/AnsiballZ_file.py'
Oct 09 09:47:30 compute-2 sudo[124233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:30 compute-2 python3.9[124235]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:30 compute-2 sudo[124233]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:30 compute-2 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct 09 09:47:30 compute-2 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct 09 09:47:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:30 compute-2 sudo[124385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umccpkavydbqsbiwonunpblvlkghpzfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003250.586014-3636-197405449365400/AnsiballZ_stat.py'
Oct 09 09:47:30 compute-2 sudo[124385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:30 compute-2 python3.9[124387]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:30 compute-2 sudo[124385]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-2 sudo[124463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdffbiqpzsodvbdppnkncpzgxoikitub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003250.586014-3636-197405449365400/AnsiballZ_file.py'
Oct 09 09:47:31 compute-2 sudo[124463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:31 compute-2 python3.9[124465]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.5mxafdoz recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:31 compute-2 sudo[124463]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-2 sudo[124616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcblctygusrexmsooocpzhooexnziked ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003251.4107676-3670-97893449749973/AnsiballZ_stat.py'
Oct 09 09:47:31 compute-2 sudo[124616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:31 compute-2 python3.9[124618]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:31 compute-2 sudo[124616]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:31.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:31 compute-2 ceph-mon[5983]: pgmap v416: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:31 compute-2 sudo[124694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkxabdweolwvntuohqztddgdezwbjrpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003251.4107676-3670-97893449749973/AnsiballZ_file.py'
Oct 09 09:47:31 compute-2 sudo[124694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:32 compute-2 python3.9[124696]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:32 compute-2 sudo[124694]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:32 compute-2 sudo[124847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zreyeoipxxwojhqfazwprnifqdadovbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003252.3570518-3710-95147392722243/AnsiballZ_command.py'
Oct 09 09:47:32 compute-2 sudo[124847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:32.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:32 compute-2 python3.9[124849]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:32 compute-2 sudo[124847]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:33 compute-2 sudo[125000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccsswkdbqfaxrotcqxouuiphaeffcgcd ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003252.8870468-3734-132430359317327/AnsiballZ_edpm_nftables_from_files.py'
Oct 09 09:47:33 compute-2 sudo[125000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:33 compute-2 python3[125002]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct 09 09:47:33 compute-2 sudo[125000]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:33 compute-2 sudo[125153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmxnufljhnfomfxmwrhlyfhxycgcoyun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003253.5244014-3758-260108186873816/AnsiballZ_stat.py'
Oct 09 09:47:33 compute-2 sudo[125153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:33.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:33 compute-2 ceph-mon[5983]: pgmap v417: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:33 compute-2 python3.9[125155]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:33 compute-2 sudo[125153]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-2 sudo[125231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iakvyqqhrkmdbddcbsgfrtsakadvzrct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003253.5244014-3758-260108186873816/AnsiballZ_file.py'
Oct 09 09:47:34 compute-2 sudo[125231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:34 compute-2 python3.9[125233]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:34 compute-2 sudo[125231]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:34 compute-2 sudo[125384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfnzbnwuyslisinjoxrledjjqdzfhhpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003254.395853-3794-169729402363858/AnsiballZ_stat.py'
Oct 09 09:47:34 compute-2 sudo[125384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:34 compute-2 python3.9[125386]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:34 compute-2 sudo[125384]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:34 compute-2 sudo[125462]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovvjalfkmoptcdazomnuxrzhtajvwzsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003254.395853-3794-169729402363858/AnsiballZ_file.py'
Oct 09 09:47:34 compute-2 sudo[125462]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-2 python3.9[125464]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:35 compute-2 sudo[125462]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:35 compute-2 sudo[125614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibmufqsokhwavhzkxesitjyhryleefcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003255.262672-3830-217640054812157/AnsiballZ_stat.py'
Oct 09 09:47:35 compute-2 sudo[125614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-2 python3.9[125616]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:35 compute-2 sudo[125614]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:35.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:35 compute-2 sudo[125693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igozlexhdsjjqjtinjqijqtdoiblstjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003255.262672-3830-217640054812157/AnsiballZ_file.py'
Oct 09 09:47:35 compute-2 sudo[125693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:35 compute-2 ceph-mon[5983]: pgmap v418: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:35 compute-2 python3.9[125695]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:35 compute-2 sudo[125693]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:36 compute-2 sudo[125846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvdspqfadaxfwansvhlsobondxprsgqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.1361032-3866-211919096208451/AnsiballZ_stat.py'
Oct 09 09:47:36 compute-2 sudo[125846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:36 compute-2 python3.9[125848]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:36 compute-2 sudo[125846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:36 compute-2 sudo[125924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqnufztlkashaqetmlpiqfnosckgxnuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003256.1361032-3866-211919096208451/AnsiballZ_file.py'
Oct 09 09:47:36 compute-2 sudo[125924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:36 compute-2 python3.9[125926]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:36 compute-2 sudo[125924]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-2 sudo[126076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynjgrmukcwxxhpnabddljztixyburubd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003257.010631-3902-130414974550667/AnsiballZ_stat.py'
Oct 09 09:47:37 compute-2 sudo[126076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:37 compute-2 python3.9[126078]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:37 compute-2 sudo[126076]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-2 sudo[126202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brwfxpwwydgaomyjyatmdeujamjvcpty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003257.010631-3902-130414974550667/AnsiballZ_copy.py'
Oct 09 09:47:37 compute-2 sudo[126202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:37.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:37 compute-2 python3.9[126204]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1760003257.010631-3902-130414974550667/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:37 compute-2 sudo[126202]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:37 compute-2 ceph-mon[5983]: pgmap v419: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:38 compute-2 sudo[126355]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfkzxohmsdwaylafxxenakcanwniutrw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003257.998978-3946-224229325977177/AnsiballZ_file.py'
Oct 09 09:47:38 compute-2 sudo[126355]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:38 compute-2 python3.9[126357]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:38 compute-2 sudo[126355]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:38.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:38 compute-2 sudo[126507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytobixjyqvjzxvzvwjefmdkciknrwbqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003258.5143805-3971-20060191396165/AnsiballZ_command.py'
Oct 09 09:47:38 compute-2 sudo[126507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:38 compute-2 python3.9[126509]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:38 compute-2 sudo[126507]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:39 compute-2 sudo[126662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujtrxbljvlanwxjzsfndujdmeetusyvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003259.0043616-3995-27898513044256/AnsiballZ_blockinfile.py'
Oct 09 09:47:39 compute-2 sudo[126662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:39 compute-2 python3.9[126664]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:39 compute-2 sudo[126662]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:39.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:39 compute-2 ceph-mon[5983]: pgmap v420: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:39 compute-2 sudo[126815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kbhqafvtbkpxkqaoajhqnkldxqgamepi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003259.7361424-4022-93443274325053/AnsiballZ_command.py'
Oct 09 09:47:39 compute-2 sudo[126815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:40 compute-2 python3.9[126817]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:40 compute-2 sudo[126815]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:40 compute-2 sudo[126969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhictjiqiduuwdfedxymdakdhvkgrws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003260.2771764-4046-149035246175992/AnsiballZ_stat.py'
Oct 09 09:47:40 compute-2 sudo[126969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:40 compute-2 python3.9[126971]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:47:40 compute-2 sudo[126969]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:40.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:40 compute-2 sudo[127123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbvcntuuxwsiezbbtvyyqdktlsqxlonv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003260.8137836-4070-161218983990093/AnsiballZ_command.py'
Oct 09 09:47:40 compute-2 sudo[127123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:41 compute-2 python3.9[127125]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:47:41 compute-2 sudo[127123]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:41 compute-2 sudo[127279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cappqgffshbyypmatiunjdfxegqqcteb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.3835456-4094-36991075428222/AnsiballZ_file.py'
Oct 09 09:47:41 compute-2 sudo[127279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:41 compute-2 python3.9[127281]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:41 compute-2 sudo[127279]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:41.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:41 compute-2 ceph-mon[5983]: pgmap v421: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:42 compute-2 sudo[127431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udbyfpddshhnxfpitdhrzkhuckcxktbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.8994794-4118-191446317715674/AnsiballZ_stat.py'
Oct 09 09:47:42 compute-2 sudo[127431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:42 compute-2 python3.9[127433]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:42 compute-2 sudo[127431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:42 compute-2 sudo[127565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgzkahjcnrjizcsizhpalldkienhdirj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003261.8994794-4118-191446317715674/AnsiballZ_copy.py'
Oct 09 09:47:42 compute-2 sudo[127565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:42 compute-2 podman[127529]: 2025-10-09 09:47:42.49154532 +0000 UTC m=+0.040853691 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 09:47:42 compute-2 python3.9[127573]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003261.8994794-4118-191446317715674/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:42.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:42 compute-2 sudo[127565]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:43 compute-2 sudo[127723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxzjlaqpweegocdgjdcumlynhidkjgob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003262.8906884-4163-8538300645112/AnsiballZ_stat.py'
Oct 09 09:47:43 compute-2 sudo[127723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:43 compute-2 python3.9[127725]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:43 compute-2 sudo[127723]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:43 compute-2 sudo[127846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxvwcstejpxjqmrgxvemjkdsftgygtxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003262.8906884-4163-8538300645112/AnsiballZ_copy.py'
Oct 09 09:47:43 compute-2 sudo[127846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:43 compute-2 python3.9[127848]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003262.8906884-4163-8538300645112/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:43 compute-2 sudo[127846]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:43.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:43 compute-2 ceph-mon[5983]: pgmap v422: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:44 compute-2 sudo[127999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ygjaukjcwxghibeoygihthbqhjsoanwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003263.8655121-4208-21489077032995/AnsiballZ_stat.py'
Oct 09 09:47:44 compute-2 sudo[127999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:44 compute-2 python3.9[128001]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:47:44 compute-2 sudo[127999]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:44 compute-2 sudo[128123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uotcwhhzddeybtumznrzvcfkddijdhft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003263.8655121-4208-21489077032995/AnsiballZ_copy.py'
Oct 09 09:47:44 compute-2 sudo[128123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:44 compute-2 python3.9[128125]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003263.8655121-4208-21489077032995/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:47:44 compute-2 sudo[128123]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:44.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:45 compute-2 sudo[128275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbxhdypldvggzmyrtpfccebycvnupyfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003264.843023-4253-82321478863052/AnsiballZ_systemd.py'
Oct 09 09:47:45 compute-2 sudo[128275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:45 compute-2 python3.9[128277]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:47:45 compute-2 systemd[1]: Reloading.
Oct 09 09:47:45 compute-2 systemd-rc-local-generator[128297]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:45 compute-2 systemd-sysv-generator[128300]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:45 compute-2 systemd[1]: Reached target edpm_libvirt.target.
Oct 09 09:47:45 compute-2 sudo[128275]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:45.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:45 compute-2 ceph-mon[5983]: pgmap v423: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:45 compute-2 sudo[128467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymvowqudwuyjpehcvnvoioyvekwkapvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003265.7975142-4277-110138754277124/AnsiballZ_systemd.py'
Oct 09 09:47:45 compute-2 sudo[128467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:46 compute-2 python3.9[128469]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct 09 09:47:46 compute-2 systemd[1]: Reloading.
Oct 09 09:47:46 compute-2 systemd-rc-local-generator[128491]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:46 compute-2 systemd-sysv-generator[128494]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:46 compute-2 systemd[1]: Reloading.
Oct 09 09:47:46 compute-2 systemd-sysv-generator[128530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:46 compute-2 systemd-rc-local-generator[128527]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:46.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:46 compute-2 sudo[128467]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:47 compute-2 sshd-session[72043]: Connection closed by 192.168.122.30 port 44776
Oct 09 09:47:47 compute-2 sshd-session[72040]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:47:47 compute-2 systemd[1]: session-36.scope: Deactivated successfully.
Oct 09 09:47:47 compute-2 systemd[1]: session-36.scope: Consumed 2min 24.210s CPU time.
Oct 09 09:47:47 compute-2 systemd-logind[800]: Session 36 logged out. Waiting for processes to exit.
Oct 09 09:47:47 compute-2 systemd-logind[800]: Removed session 36.
Oct 09 09:47:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:47.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:47 compute-2 ceph-mon[5983]: pgmap v424: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:48.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:49.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:49 compute-2 ceph-mon[5983]: pgmap v425: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:47:49 compute-2 sudo[128570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:50 compute-2 sudo[128570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:50 compute-2 sudo[128570]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:50.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:51.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:51 compute-2 ceph-mon[5983]: pgmap v426: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:47:52 compute-2 podman[128598]: 2025-10-09 09:47:52.219622021 +0000 UTC m=+0.055537484 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller)
Oct 09 09:47:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:52.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:52 compute-2 sshd-session[128622]: Accepted publickey for zuul from 192.168.122.30 port 60788 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:47:52 compute-2 systemd-logind[800]: New session 37 of user zuul.
Oct 09 09:47:52 compute-2 systemd[1]: Started Session 37 of User zuul.
Oct 09 09:47:52 compute-2 sshd-session[128622]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:47:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:53 compute-2 python3.9[128775]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:47:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:53.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:53 compute-2 ceph-mon[5983]: pgmap v427: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:47:54 compute-2 sudo[128931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozamnlzcwohdpmrencrcyzvfzdoixnub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003274.1135774-64-91663746877193/AnsiballZ_file.py'
Oct 09 09:47:54 compute-2 sudo[128931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:54 compute-2 python3.9[128933]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:54 compute-2 sudo[128931]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:54.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:54 compute-2 sudo[129083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmvzjvrwqgcekybngeocovcuuzyshfdz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003274.723856-64-68042206932655/AnsiballZ_file.py'
Oct 09 09:47:54 compute-2 sudo[129083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:47:55 compute-2 python3.9[129085]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:55 compute-2 sudo[129083]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:55 compute-2 sudo[129235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-degunohtbnlfyyavordfzxpvojizjfit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003275.1845133-64-194949109596835/AnsiballZ_file.py'
Oct 09 09:47:55 compute-2 sudo[129235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-2 python3.9[129237]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:55 compute-2 sudo[129235]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:55 compute-2 sudo[129388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeymrmutofqrpjtkghtaiajuuewvakjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003275.6524045-64-210760281823506/AnsiballZ_file.py'
Oct 09 09:47:55 compute-2 sudo[129388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:55 compute-2 ceph-mon[5983]: pgmap v428: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:47:56 compute-2 python3.9[129390]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:47:56 compute-2 sudo[129388]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:56 compute-2 sudo[129541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqstagmdxpuaavlaagszsjcjhixwsxyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003276.1118882-64-73420161124275/AnsiballZ_file.py'
Oct 09 09:47:56 compute-2 sudo[129541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:47:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:56.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:47:56 compute-2 python3.9[129543]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:47:56 compute-2 sudo[129541]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:57 compute-2 sudo[129693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybznboucepzbdomhjhakypufzkaxbwvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003277.0630813-173-86361025065711/AnsiballZ_stat.py'
Oct 09 09:47:57 compute-2 sudo[129693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:57 compute-2 python3.9[129695]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:47:57 compute-2 sudo[129693]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:57.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:57 compute-2 ceph-mon[5983]: pgmap v429: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:47:58 compute-2 sudo[129849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxjhlnocgebkbrnpymzpmmymxabxabxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003277.7483842-196-32708438948895/AnsiballZ_systemd.py'
Oct 09 09:47:58 compute-2 sudo[129849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:58 compute-2 python3.9[129851]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:47:58 compute-2 systemd[1]: Reloading.
Oct 09 09:47:58 compute-2 systemd-rc-local-generator[129873]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:47:58 compute-2 systemd-sysv-generator[129877]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:47:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:47:58.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:58 compute-2 sudo[129849]: pam_unix(sudo:session): session closed for user root
Oct 09 09:47:59 compute-2 sudo[130038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lgtwtkwcmusvanypjlcaeyrudkoqkwbr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003278.9674947-220-243825426191415/AnsiballZ_service_facts.py'
Oct 09 09:47:59 compute-2 sudo[130038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:47:59 compute-2 python3.9[130040]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:47:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:47:59 compute-2 network[130057]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:47:59 compute-2 network[130058]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:47:59 compute-2 network[130059]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:47:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:47:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:47:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:47:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:47:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:47:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:47:59.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:47:59 compute-2 ceph-mon[5983]: pgmap v430: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:47:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:00.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:01 compute-2 sudo[130038]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:01.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:01 compute-2 ceph-mon[5983]: pgmap v431: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:02 compute-2 sudo[130334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgxybazyjnlyfznvjzawjfujxhkydwpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003281.79334-244-253489926802083/AnsiballZ_systemd.py'
Oct 09 09:48:02 compute-2 sudo[130334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:02 compute-2 python3.9[130336]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:02 compute-2 systemd[1]: Reloading.
Oct 09 09:48:02 compute-2 systemd-rc-local-generator[130362]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:02 compute-2 systemd-sysv-generator[130366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:02 compute-2 sudo[130334]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:02.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:03 compute-2 python3.9[130524]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:03 compute-2 sudo[130675]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibwwcpabkcigcxzrcvoeunfwfjvbukmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003283.3755202-295-123397731909962/AnsiballZ_podman_container.py'
Oct 09 09:48:03 compute-2 sudo[130675]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:03.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:03 compute-2 python3.9[130677]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:48:04 compute-2 ceph-mon[5983]: pgmap v432: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:04 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:48:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:04.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:05.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:06 compute-2 ceph-mon[5983]: pgmap v433: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:06 compute-2 podman[130687]: 2025-10-09 09:48:06.46156246 +0000 UTC m=+2.500339583 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.548513767 +0000 UTC m=+0.025379559 container create cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5731] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/23)
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-2 kernel: veth0: entered allmulticast mode
Oct 09 09:48:06 compute-2 kernel: veth0: entered promiscuous mode
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered blocking state
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered forwarding state
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5879] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/24)
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5893] device (veth0): carrier: link connected
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.5897] device (podman0): carrier: link connected
Oct 09 09:48:06 compute-2 systemd-udevd[130760]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:48:06 compute-2 systemd-udevd[130763]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6094] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6103] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6110] device (podman0): Activation: starting connection 'podman0' (dcd2f42b-b181-4e8d-91dd-41a4682e8154)
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6112] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6117] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6120] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6124] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct 09 09:48:06 compute-2 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct 09 09:48:06 compute-2 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct 09 09:48:06 compute-2 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6322] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6323] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.6328] device (podman0): Activation: successful, device activated.
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.537051517 +0000 UTC m=+0.013917309 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:06 compute-2 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct 09 09:48:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:06.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:06 compute-2 systemd[1]: Started libpod-conmon-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope.
Oct 09 09:48:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:06 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.817510919 +0000 UTC m=+0.294376721 container init cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.822447155 +0000 UTC m=+0.299312947 container start cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.823806087 +0000 UTC m=+0.300671869 container attach cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 09:48:06 compute-2 iscsid_config[130890]: iqn.1994-05.com.redhat:9c86a716692
Oct 09 09:48:06 compute-2 systemd[1]: libpod-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope: Deactivated successfully.
Oct 09 09:48:06 compute-2 podman[130737]: 2025-10-09 09:48:06.825787071 +0000 UTC m=+0.302652863 container died cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-2 kernel: veth0 (unregistering): left allmulticast mode
Oct 09 09:48:06 compute-2 kernel: veth0 (unregistering): left promiscuous mode
Oct 09 09:48:06 compute-2 kernel: podman0: port 1(veth0) entered disabled state
Oct 09 09:48:06 compute-2 NetworkManager[984]: <info>  [1760003286.8554] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 09:48:07 compute-2 systemd[1]: run-netns-netns\x2d09c06a0d\x2d2916\x2d83ef\x2d18ed\x2d275b7aea03c9.mount: Deactivated successfully.
Oct 09 09:48:07 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc-userdata-shm.mount: Deactivated successfully.
Oct 09 09:48:07 compute-2 podman[130737]: 2025-10-09 09:48:07.087679199 +0000 UTC m=+0.564544991 container remove cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:48:07 compute-2 systemd[1]: libpod-conmon-cd251fbb68706b4ed927c5f2eeeec0692acd2356af4802749228322ccf1ed1bc.scope: Deactivated successfully.
Oct 09 09:48:07 compute-2 python3.9[130677]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct 09 09:48:07 compute-2 python3.9[130677]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: 
                                             DEPRECATED command:
                                             It is recommended to use Quadlets for running containers and pods under systemd.
                                             
                                             Please refer to podman-systemd.unit(5) for details.
                                             Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct 09 09:48:07 compute-2 sudo[130675]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:07 compute-2 systemd[1]: var-lib-containers-storage-overlay-44bb6cfe5fc5e8430efc2845fc1bb2d5b9aae246adbe036623accd9cb8fedea9-merged.mount: Deactivated successfully.
Oct 09 09:48:07 compute-2 sudo[131124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhcfrtsiraupzssapebnhdsnkjugpsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003287.3777783-319-224945256298722/AnsiballZ_stat.py'
Oct 09 09:48:07 compute-2 sudo[131124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:07 compute-2 python3.9[131126]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:07 compute-2 sudo[131124]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:07.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:08 compute-2 ceph-mon[5983]: pgmap v434: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:08 compute-2 sudo[131247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuarnudxbelaadgfhukmngbyilzvpwix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003287.3777783-319-224945256298722/AnsiballZ_copy.py'
Oct 09 09:48:08 compute-2 sudo[131247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:08 compute-2 python3.9[131249]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003287.3777783-319-224945256298722/.source.iscsi _original_basename=.iuewmxpg follow=False checksum=3866c381b9a8229703da1f68474b46516e3b1cdc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:08 compute-2 sudo[131247]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:08 compute-2 sudo[131400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezhpseqrrvqtysvdoydxqllvczrlaosb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003288.4443855-365-42055995881942/AnsiballZ_file.py'
Oct 09 09:48:08 compute-2 sudo[131400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:08.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:08 compute-2 python3.9[131402]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:08 compute-2 sudo[131400]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:09 compute-2 python3.9[131552]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:09.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:10 compute-2 ceph-mon[5983]: pgmap v435: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:10 compute-2 sudo[131682]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:10 compute-2 sudo[131682]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:10 compute-2 sudo[131682]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:10 compute-2 sudo[131728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnkkkbmdgqrnncehfblwpjkkfzjhfyzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003289.7400203-416-142795060187627/AnsiballZ_lineinfile.py'
Oct 09 09:48:10 compute-2 sudo[131728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:10 compute-2 python3.9[131732]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:10 compute-2 sudo[131728]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:48:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:48:10.267 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:48:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:10.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:10 compute-2 sudo[131883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuqbapwqsbtfdqocuvmzhydbwnfysba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003290.5100844-443-34756672952677/AnsiballZ_file.py'
Oct 09 09:48:10 compute-2 sudo[131883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:10 compute-2 python3.9[131885]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:10 compute-2 sudo[131883]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:11 compute-2 sudo[132035]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbcvogouipnmoaeaqoiwxslpzotwqxbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.0360281-467-124780357212961/AnsiballZ_stat.py'
Oct 09 09:48:11 compute-2 sudo[132035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:11 compute-2 python3.9[132037]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:11 compute-2 sudo[132035]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:11 compute-2 sudo[132114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yasmflfpilpizwjkgcgvuaitqknxhtgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.0360281-467-124780357212961/AnsiballZ_file.py'
Oct 09 09:48:11 compute-2 sudo[132114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:11 compute-2 python3.9[132116]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:11 compute-2 sudo[132114]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:11.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:12 compute-2 sudo[132266]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvoobmvqdnywtxisqmphequkelojkvmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.8255112-467-44077525810539/AnsiballZ_stat.py'
Oct 09 09:48:12 compute-2 sudo[132266]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:12 compute-2 ceph-mon[5983]: pgmap v436: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:12 compute-2 python3.9[132268]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:12 compute-2 sudo[132266]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:12 compute-2 sudo[132345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ttlrdzlitflfkkszgwuajjbsebyfszbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003291.8255112-467-44077525810539/AnsiballZ_file.py'
Oct 09 09:48:12 compute-2 sudo[132345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:12 compute-2 python3.9[132347]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:12 compute-2 sudo[132345]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:12.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:12 compute-2 sudo[132509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcbyyrgnsomkjdaxkqpmmxsdwgrbymsp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003292.707355-536-21367044857744/AnsiballZ_file.py'
Oct 09 09:48:12 compute-2 sudo[132509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:12 compute-2 podman[132471]: 2025-10-09 09:48:12.956371803 +0000 UTC m=+0.065296423 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:13 compute-2 python3.9[132515]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:13 compute-2 sudo[132509]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:13 compute-2 sudo[132666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tflsygaetpqnctpwsvwfvjgpnadwqefy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003293.2890956-560-50298817847456/AnsiballZ_stat.py'
Oct 09 09:48:13 compute-2 sudo[132666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:13 compute-2 python3.9[132668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:13 compute-2 sudo[132666]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:13.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:13 compute-2 sudo[132744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hendmxpwiqlzxwraqdjkilaudfgjesgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003293.2890956-560-50298817847456/AnsiballZ_file.py'
Oct 09 09:48:13 compute-2 sudo[132744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:14 compute-2 ceph-mon[5983]: pgmap v437: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:14 compute-2 python3.9[132746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:14 compute-2 sudo[132744]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:14 compute-2 sudo[132897]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssydhyzpdaevesjahldfkjnfuxxeyyrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003294.3248327-596-39500619886083/AnsiballZ_stat.py'
Oct 09 09:48:14 compute-2 sudo[132897]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:14 compute-2 python3.9[132899]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:14.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:14 compute-2 sudo[132897]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:14 compute-2 sudo[132975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptvsqwvdoyhvshxpuwgcifhwfcdtxwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003294.3248327-596-39500619886083/AnsiballZ_file.py'
Oct 09 09:48:14 compute-2 sudo[132975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:15 compute-2 python3.9[132977]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:15 compute-2 sudo[132975]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:15 compute-2 sudo[133127]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdaqvhlibafqmqpnxynmjwnqoexsjoio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003295.2207303-632-165360484671749/AnsiballZ_systemd.py'
Oct 09 09:48:15 compute-2 sudo[133127]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:15 compute-2 python3.9[133129]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:15 compute-2 systemd[1]: Reloading.
Oct 09 09:48:15 compute-2 systemd-rc-local-generator[133158]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:15 compute-2 systemd-sysv-generator[133162]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:15.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:15 compute-2 sudo[133127]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-2 ceph-mon[5983]: pgmap v438: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:16 compute-2 sudo[133318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbeqgubhrulyzrlsaiddgtjgzhcgkutz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.1048772-656-111191073618772/AnsiballZ_stat.py'
Oct 09 09:48:16 compute-2 sudo[133318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:16 compute-2 python3.9[133320]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:16 compute-2 sudo[133318]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-2 sudo[133396]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjshrvkcpoidevgsxmglzzfekfdkeulu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.1048772-656-111191073618772/AnsiballZ_file.py'
Oct 09 09:48:16 compute-2 sudo[133396]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:16.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:16 compute-2 python3.9[133398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:16 compute-2 sudo[133396]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:16 compute-2 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct 09 09:48:17 compute-2 sudo[133548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppamhuptvayrrrdtmlbaxyznmkboqqak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.9507926-692-15735597691274/AnsiballZ_stat.py'
Oct 09 09:48:17 compute-2 sudo[133548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:17 compute-2 python3.9[133550]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:17 compute-2 sudo[133548]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:17 compute-2 sudo[133626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmaihyeylyokzaqnbocnjlexgurmjrus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003296.9507926-692-15735597691274/AnsiballZ_file.py'
Oct 09 09:48:17 compute-2 sudo[133626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:17 compute-2 python3.9[133628]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:17 compute-2 sudo[133626]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:17.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:18 compute-2 sudo[133779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdaqvndumrtgbirihrcgsgasjgiblqvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003297.8116426-728-40030748819216/AnsiballZ_systemd.py'
Oct 09 09:48:18 compute-2 sudo[133779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:18 compute-2 ceph-mon[5983]: pgmap v439: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:18 compute-2 python3.9[133781]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:18 compute-2 systemd[1]: Reloading.
Oct 09 09:48:18 compute-2 systemd-rc-local-generator[133803]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:18 compute-2 systemd-sysv-generator[133809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:18 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:48:18 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:48:18 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:48:18 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:48:18 compute-2 sudo[133779]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:18.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:19 compute-2 sudo[133973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiqgkpzhywoeievhlvrhffcfgagpmbnr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003299.0060894-758-235131336802486/AnsiballZ_file.py'
Oct 09 09:48:19 compute-2 sudo[133973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:19 compute-2 python3.9[133975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:19 compute-2 sudo[133973]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:19 compute-2 sudo[134126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fblmbkskzfqdlafrwqpyvqwuotckwvav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003299.5423007-782-67805315050740/AnsiballZ_stat.py'
Oct 09 09:48:19 compute-2 sudo[134126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:19.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:19 compute-2 python3.9[134128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:19 compute-2 sudo[134126]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:20 compute-2 ceph-mon[5983]: pgmap v440: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:20 compute-2 sudo[134250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xuhcgokvhihbrzfhddfpxanvhxdetkcp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003299.5423007-782-67805315050740/AnsiballZ_copy.py'
Oct 09 09:48:20 compute-2 sudo[134250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:20 compute-2 python3.9[134252]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003299.5423007-782-67805315050740/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:20 compute-2 sudo[134250]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:20.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:20 compute-2 sudo[134402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-itggeopfilyktjqevkjwccjztlbqygrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003300.7146132-833-102997361190055/AnsiballZ_file.py'
Oct 09 09:48:20 compute-2 sudo[134402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-2 python3.9[134404]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:21 compute-2 sudo[134402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:21 compute-2 sudo[134554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayulvosakamyeyzicxgbdjxcfztgocby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003301.236646-856-136711426623362/AnsiballZ_stat.py'
Oct 09 09:48:21 compute-2 sudo[134554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-2 python3.9[134556]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:21 compute-2 sudo[134554]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:21 compute-2 sudo[134678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efqcepmermouuggrfjfipvsxblgszyxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003301.236646-856-136711426623362/AnsiballZ_copy.py'
Oct 09 09:48:21 compute-2 sudo[134678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:21.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:21 compute-2 python3.9[134680]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003301.236646-856-136711426623362/.source.json _original_basename=.sdhh2sjh follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:21 compute-2 sudo[134678]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:22 compute-2 ceph-mon[5983]: pgmap v441: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:22 compute-2 sudo[134841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muevfaycsjsigufcfoqfaizyihzxpbjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.1307504-902-247465352707979/AnsiballZ_file.py'
Oct 09 09:48:22 compute-2 sudo[134841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:22 compute-2 podman[134805]: 2025-10-09 09:48:22.35171856 +0000 UTC m=+0.060620750 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:48:22 compute-2 python3.9[134849]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:22 compute-2 sudo[134841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:22.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:22 compute-2 sudo[135006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrvqeyedhfphoziqsynqcoirtxwnkysu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.6788518-925-170473199070603/AnsiballZ_stat.py'
Oct 09 09:48:22 compute-2 sudo[135006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:23 compute-2 sudo[135006]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:23 compute-2 sudo[135129]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eeqxvpbnxmzcbtxckiaivnohtmtcnqmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003302.6788518-925-170473199070603/AnsiballZ_copy.py'
Oct 09 09:48:23 compute-2 sudo[135129]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:23 compute-2 sudo[135129]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:23.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:24 compute-2 sudo[135282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evkdzjqfbvcbizozpqzwmamoorwevyvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003303.7499557-977-237740452549394/AnsiballZ_container_config_data.py'
Oct 09 09:48:24 compute-2 sudo[135282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:24 compute-2 ceph-mon[5983]: pgmap v442: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:24 compute-2 python3.9[135284]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct 09 09:48:24 compute-2 sudo[135282]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:24.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:24 compute-2 sudo[135435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztnbvgcttlcktsilmbhhknifsolzgjmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003304.4605603-1004-7416704067689/AnsiballZ_container_config_hash.py'
Oct 09 09:48:24 compute-2 sudo[135435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:24 compute-2 python3.9[135437]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:48:24 compute-2 sudo[135435]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:25 compute-2 sudo[135587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wlucdhbhorydfsurtaujvklxkemwabxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003305.1892836-1030-64913578749970/AnsiballZ_podman_container_info.py'
Oct 09 09:48:25 compute-2 sudo[135587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:25 compute-2 python3.9[135590]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:48:25 compute-2 sudo[135587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:25.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:26 compute-2 ceph-mon[5983]: pgmap v443: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:27 compute-2 sudo[135687]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:48:27 compute-2 sudo[135687]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:27 compute-2 sudo[135687]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:27 compute-2 sudo[135735]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:48:27 compute-2 sudo[135735]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:27 compute-2 sudo[135810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkcgvprgluwinfizojkaopqexsskwcpe ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003306.7293987-1069-194471102716827/AnsiballZ_edpm_container_manage.py'
Oct 09 09:48:27 compute-2 sudo[135810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:27 compute-2 python3[135812]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:48:27 compute-2 podman[135889]: 2025-10-09 09:48:27.538517203 +0000 UTC m=+0.041219203 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct 09 09:48:27 compute-2 podman[135906]: 2025-10-09 09:48:27.552505543 +0000 UTC m=+0.031030178 container create 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible)
Oct 09 09:48:27 compute-2 podman[135906]: 2025-10-09 09:48:27.537966435 +0000 UTC m=+0.016491070 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:27 compute-2 python3[135812]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct 09 09:48:27 compute-2 sudo[135810]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:27 compute-2 podman[135943]: 2025-10-09 09:48:27.671931497 +0000 UTC m=+0.044146854 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:48:27 compute-2 podman[135889]: 2025-10-09 09:48:27.675694152 +0000 UTC m=+0.178396143 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct 09 09:48:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:27.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:27 compute-2 podman[136084]: 2025-10-09 09:48:27.932175137 +0000 UTC m=+0.037885455 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:48:27 compute-2 podman[136084]: 2025-10-09 09:48:27.943005521 +0000 UTC m=+0.048715830 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:48:28 compute-2 sudo[136213]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjuraxacaxvqodhgoeedtywaumxhmuxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003307.841659-1094-269607820521631/AnsiballZ_stat.py'
Oct 09 09:48:28 compute-2 sudo[136213]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:28 compute-2 ceph-mon[5983]: pgmap v444: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:28 compute-2 python3.9[136220]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:28 compute-2 podman[136261]: 2025-10-09 09:48:28.246989287 +0000 UTC m=+0.036576368 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:48:28 compute-2 sudo[136213]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-2 podman[136261]: 2025-10-09 09:48:28.255031943 +0000 UTC m=+0.044619025 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:48:28 compute-2 podman[136339]: 2025-10-09 09:48:28.403133805 +0000 UTC m=+0.035330838 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, build-date=2023-02-22T09:23:20, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, io.openshift.tags=Ceph keepalived, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vcs-type=git, release=1793, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc.)
Oct 09 09:48:28 compute-2 podman[136339]: 2025-10-09 09:48:28.41507816 +0000 UTC m=+0.047275192 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1793, distribution-scope=public, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., description=keepalived for Ceph, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Oct 09 09:48:28 compute-2 podman[136380]: 2025-10-09 09:48:28.526940892 +0000 UTC m=+0.034837398 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct 09 09:48:28 compute-2 podman[136380]: 2025-10-09 09:48:28.537056387 +0000 UTC m=+0.044952894 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct 09 09:48:28 compute-2 sudo[135735]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:28 compute-2 sudo[136511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:48:28 compute-2 sudo[136511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:28 compute-2 sudo[136511]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:28 compute-2 sudo[136560]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:48:28 compute-2 sudo[136560]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:28 compute-2 sudo[136609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjuetwnvyzprtvjcwtjmjkzukhxkfwse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003308.56324-1120-135339374711619/AnsiballZ_file.py'
Oct 09 09:48:28 compute-2 sudo[136609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:28 compute-2 python3.9[136613]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:28 compute-2 sudo[136609]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-2 sudo[136704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khzwybqqgghzgqfcefbnpqdmvcifhdaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003308.56324-1120-135339374711619/AnsiballZ_stat.py'
Oct 09 09:48:29 compute-2 sudo[136704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:29 compute-2 sudo[136560]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-2 python3.9[136706]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:29 compute-2 sudo[136704]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:29 compute-2 ceph-mon[5983]: pgmap v445: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:48:29 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:48:29 compute-2 sudo[136868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qydntvqwxxodfqlnrtxzhkrkirtueeyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3228939-1120-181493640793045/AnsiballZ_copy.py'
Oct 09 09:48:29 compute-2 sudo[136868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:29 compute-2 python3.9[136870]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003309.3228939-1120-181493640793045/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:29 compute-2 sudo[136868]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:29.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:30 compute-2 sudo[136944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmgchtikicedaqodkqncwghzbrkvlibn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3228939-1120-181493640793045/AnsiballZ_systemd.py'
Oct 09 09:48:30 compute-2 sudo[136944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:30 compute-2 sudo[136947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:30 compute-2 sudo[136947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:30 compute-2 sudo[136947]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:30 compute-2 python3.9[136946]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:48:30 compute-2 systemd[1]: Reloading.
Oct 09 09:48:30 compute-2 systemd-rc-local-generator[136993]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:30 compute-2 systemd-sysv-generator[136997]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:30 compute-2 sudo[136944]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:30 compute-2 ceph-mon[5983]: pgmap v446: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:30 compute-2 ceph-mon[5983]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Oct 09 09:48:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:30.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:30 compute-2 sudo[137080]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xqyakmroanrwihgqqdtfhbwsnpmoobvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003309.3228939-1120-181493640793045/AnsiballZ_systemd.py'
Oct 09 09:48:30 compute-2 sudo[137080]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:30 compute-2 python3.9[137082]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:31 compute-2 systemd[1]: Reloading.
Oct 09 09:48:31 compute-2 systemd-rc-local-generator[137105]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:31 compute-2 systemd-sysv-generator[137111]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:31 compute-2 systemd[1]: Starting iscsid container...
Oct 09 09:48:31 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c365f50d96c431c8e9c3f277438705dbff589d8af32316c641ba8ce4f1fb84ca/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:48:31 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41.
Oct 09 09:48:31 compute-2 podman[137122]: 2025-10-09 09:48:31.367833764 +0000 UTC m=+0.084678890 container init 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 09 09:48:31 compute-2 iscsid[137134]: + sudo -E kolla_set_configs
Oct 09 09:48:31 compute-2 sudo[137140]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:48:31 compute-2 podman[137122]: 2025-10-09 09:48:31.391658352 +0000 UTC m=+0.108503458 container start 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:31 compute-2 systemd[1]: Created slice User Slice of UID 0.
Oct 09 09:48:31 compute-2 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct 09 09:48:31 compute-2 podman[137122]: iscsid
Oct 09 09:48:31 compute-2 systemd[1]: Started iscsid container.
Oct 09 09:48:31 compute-2 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct 09 09:48:31 compute-2 systemd[1]: Starting User Manager for UID 0...
Oct 09 09:48:31 compute-2 sudo[137080]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-2 systemd[137149]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Oct 09 09:48:31 compute-2 podman[137141]: 2025-10-09 09:48:31.459493777 +0000 UTC m=+0.059123927 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:48:31 compute-2 systemd[1]: 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41-44090ffe15e06773.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:48:31 compute-2 systemd[1]: 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41-44090ffe15e06773.service: Failed with result 'exit-code'.
Oct 09 09:48:31 compute-2 systemd[137149]: Queued start job for default target Main User Target.
Oct 09 09:48:31 compute-2 systemd[137149]: Created slice User Application Slice.
Oct 09 09:48:31 compute-2 systemd[137149]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct 09 09:48:31 compute-2 systemd[137149]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 09:48:31 compute-2 systemd[137149]: Reached target Paths.
Oct 09 09:48:31 compute-2 systemd[137149]: Reached target Timers.
Oct 09 09:48:31 compute-2 systemd[137149]: Starting D-Bus User Message Bus Socket...
Oct 09 09:48:31 compute-2 systemd[137149]: Starting Create User's Volatile Files and Directories...
Oct 09 09:48:31 compute-2 systemd[137149]: Listening on D-Bus User Message Bus Socket.
Oct 09 09:48:31 compute-2 systemd[137149]: Reached target Sockets.
Oct 09 09:48:31 compute-2 systemd[137149]: Finished Create User's Volatile Files and Directories.
Oct 09 09:48:31 compute-2 systemd[137149]: Reached target Basic System.
Oct 09 09:48:31 compute-2 systemd[137149]: Reached target Main User Target.
Oct 09 09:48:31 compute-2 systemd[137149]: Startup finished in 96ms.
Oct 09 09:48:31 compute-2 systemd[1]: Started User Manager for UID 0.
Oct 09 09:48:31 compute-2 systemd[1]: Started Session c3 of User root.
Oct 09 09:48:31 compute-2 sudo[137140]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Validating config file
Oct 09 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:48:31 compute-2 iscsid[137134]: INFO:__main__:Writing out command to execute
Oct 09 09:48:31 compute-2 sudo[137140]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-2 systemd[1]: session-c3.scope: Deactivated successfully.
Oct 09 09:48:31 compute-2 iscsid[137134]: ++ cat /run_command
Oct 09 09:48:31 compute-2 iscsid[137134]: + CMD='/usr/sbin/iscsid -f'
Oct 09 09:48:31 compute-2 iscsid[137134]: + ARGS=
Oct 09 09:48:31 compute-2 iscsid[137134]: + sudo kolla_copy_cacerts
Oct 09 09:48:31 compute-2 sudo[137215]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:48:31 compute-2 systemd[1]: Started Session c4 of User root.
Oct 09 09:48:31 compute-2 sudo[137215]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:48:31 compute-2 sudo[137215]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:31 compute-2 iscsid[137134]: Running command: '/usr/sbin/iscsid -f'
Oct 09 09:48:31 compute-2 iscsid[137134]: + [[ ! -n '' ]]
Oct 09 09:48:31 compute-2 iscsid[137134]: + . kolla_extend_start
Oct 09 09:48:31 compute-2 iscsid[137134]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct 09 09:48:31 compute-2 iscsid[137134]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct 09 09:48:31 compute-2 iscsid[137134]: + umask 0022
Oct 09 09:48:31 compute-2 iscsid[137134]: + exec /usr/sbin/iscsid -f
Oct 09 09:48:31 compute-2 systemd[1]: session-c4.scope: Deactivated successfully.
Oct 09 09:48:31 compute-2 kernel: Loading iSCSI transport class v2.0-870.
Oct 09 09:48:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:31.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:31 compute-2 python3.9[137337]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:32 compute-2 sudo[137488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcbvcgqzdjbdbjmuuhipkgrotbccbtfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003312.1660511-1232-58455681199302/AnsiballZ_file.py'
Oct 09 09:48:32 compute-2 sudo[137488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:32 compute-2 python3.9[137490]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:32 compute-2 sudo[137488]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:32 compute-2 ceph-mon[5983]: pgmap v447: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:48:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:32 compute-2 sudo[137538]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:48:32 compute-2 sudo[137538]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:32 compute-2 sudo[137538]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:33 compute-2 sudo[137665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-neenzudcubxwjgyzvlysoapxdzbptfve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003312.9204175-1265-158460532987501/AnsiballZ_service_facts.py'
Oct 09 09:48:33 compute-2 sudo[137665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:33 compute-2 python3.9[137667]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:48:33 compute-2 network[137684]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:48:33 compute-2 network[137685]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:48:33 compute-2 network[137686]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:48:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:33.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:34.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:34 compute-2 ceph-mon[5983]: pgmap v448: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:48:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:35 compute-2 sudo[137665]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:35.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:36.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:36 compute-2 ceph-mon[5983]: pgmap v449: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:37.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:37 compute-2 sudo[137964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiebewmszisphucibqenxlubjgbgnefq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003317.7068315-1295-47523774392277/AnsiballZ_file.py'
Oct 09 09:48:37 compute-2 sudo[137964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:38 compute-2 python3.9[137966]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:48:38 compute-2 sudo[137964]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:38.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:38 compute-2 sudo[138117]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zixceakfoemxcpvpdtvdvywzcrbgrxhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003318.4517481-1319-268248550451249/AnsiballZ_modprobe.py'
Oct 09 09:48:38 compute-2 sudo[138117]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:38 compute-2 ceph-mon[5983]: pgmap v450: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:38 compute-2 python3.9[138119]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct 09 09:48:38 compute-2 sudo[138117]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:39 compute-2 sudo[138273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nosirdzgwgsqfzbscplzmausulhehahk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003319.1033359-1343-120866255677436/AnsiballZ_stat.py'
Oct 09 09:48:39 compute-2 sudo[138273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:39 compute-2 python3.9[138275]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:39 compute-2 sudo[138273]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:39 compute-2 sudo[138397]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxaeyvftxlytlmlstehqcukxgpaudoxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003319.1033359-1343-120866255677436/AnsiballZ_copy.py'
Oct 09 09:48:39 compute-2 sudo[138397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:39.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:39 compute-2 python3.9[138399]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003319.1033359-1343-120866255677436/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:39 compute-2 sudo[138397]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:40 compute-2 sudo[138550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfivmdnnfrhywfwdvczdcbtbqtmpxtmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003320.279409-1391-246164385142178/AnsiballZ_lineinfile.py'
Oct 09 09:48:40 compute-2 sudo[138550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:40 compute-2 python3.9[138552]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:40 compute-2 sudo[138550]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:40.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:40 compute-2 ceph-mon[5983]: pgmap v451: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:48:41 compute-2 sudo[138702]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzvxgyaknxdbxpgtmnvhoahdnkexjhyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003320.8193836-1415-35044166149862/AnsiballZ_systemd.py'
Oct 09 09:48:41 compute-2 sudo[138702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:41 compute-2 python3.9[138704]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:48:41 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 09:48:41 compute-2 systemd[1]: Stopped Load Kernel Modules.
Oct 09 09:48:41 compute-2 systemd[1]: Stopping Load Kernel Modules...
Oct 09 09:48:41 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:48:41 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:48:41 compute-2 sudo[138702]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:41 compute-2 systemd[1]: Stopping User Manager for UID 0...
Oct 09 09:48:41 compute-2 systemd[137149]: Activating special unit Exit the Session...
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped target Main User Target.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped target Basic System.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped target Paths.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped target Sockets.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped target Timers.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:48:41 compute-2 systemd[137149]: Closed D-Bus User Message Bus Socket.
Oct 09 09:48:41 compute-2 systemd[137149]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:48:41 compute-2 systemd[137149]: Removed slice User Application Slice.
Oct 09 09:48:41 compute-2 systemd[137149]: Reached target Shutdown.
Oct 09 09:48:41 compute-2 systemd[137149]: Finished Exit the Session.
Oct 09 09:48:41 compute-2 systemd[137149]: Reached target Exit the Session.
Oct 09 09:48:41 compute-2 systemd[1]: user@0.service: Deactivated successfully.
Oct 09 09:48:41 compute-2 systemd[1]: Stopped User Manager for UID 0.
Oct 09 09:48:41 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct 09 09:48:41 compute-2 sudo[138859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxgmzldxossunuuyyjhhdqmdnnpnzxsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003321.4890046-1439-139857324710430/AnsiballZ_file.py'
Oct 09 09:48:41 compute-2 sudo[138859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:41 compute-2 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct 09 09:48:41 compute-2 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct 09 09:48:41 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct 09 09:48:41 compute-2 systemd[1]: Removed slice User Slice of UID 0.
Oct 09 09:48:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:41 compute-2 python3.9[138862]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:41 compute-2 sudo[138859]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:41.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:42 compute-2 sudo[139013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilqencocgokhjetqhjgcerukpykqiohj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003322.2700481-1466-116646183826344/AnsiballZ_stat.py'
Oct 09 09:48:42 compute-2 sudo[139013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:42 compute-2 python3.9[139015]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:42 compute-2 sudo[139013]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:42 compute-2 ceph-mon[5983]: pgmap v452: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:43 compute-2 sudo[139174]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpieqmdlndivpsfsicufbvwyzcqxqzdp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003322.8698888-1492-116206764231946/AnsiballZ_stat.py'
Oct 09 09:48:43 compute-2 sudo[139174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:43 compute-2 podman[139139]: 2025-10-09 09:48:43.098931186 +0000 UTC m=+0.064236939 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:43 compute-2 python3.9[139184]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:43 compute-2 sudo[139174]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:43 compute-2 sudo[139335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwtafzquwnnpibtqylsqkbhwarnpxyks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003323.4419153-1517-54096167749765/AnsiballZ_stat.py'
Oct 09 09:48:43 compute-2 sudo[139335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:43 compute-2 python3.9[139337]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:43 compute-2 sudo[139335]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:43.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:44 compute-2 sudo[139458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkggxhqylgnarcyqtpgehghsfehndnyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003323.4419153-1517-54096167749765/AnsiballZ_copy.py'
Oct 09 09:48:44 compute-2 sudo[139458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:44 compute-2 python3.9[139460]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003323.4419153-1517-54096167749765/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:44 compute-2 sudo[139458]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:44.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:44 compute-2 sudo[139611]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcpmthityfzztzmracvboowmrhuxgjfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003324.4456913-1562-42667162149704/AnsiballZ_command.py'
Oct 09 09:48:44 compute-2 sudo[139611]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:44 compute-2 ceph-mon[5983]: pgmap v453: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:44 compute-2 python3.9[139613]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:48:44 compute-2 sudo[139611]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:45 compute-2 sudo[139764]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhabxwaykfovaosnqrrxjtkmuqrghnfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003325.1293101-1586-98582766086520/AnsiballZ_lineinfile.py'
Oct 09 09:48:45 compute-2 sudo[139764]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:45 compute-2 python3.9[139766]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:45 compute-2 sudo[139764]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:45.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:45 compute-2 sudo[139917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyfzhsumtbvrptcpekcsxhaxvlsnclgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003325.6726406-1610-220308923809471/AnsiballZ_replace.py'
Oct 09 09:48:45 compute-2 sudo[139917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:46 compute-2 python3.9[139919]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:46 compute-2 sudo[139917]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:46 compute-2 sudo[140070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqeqobhvgsndmbegmxhndqdcikylolzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003326.352281-1634-116270464811419/AnsiballZ_replace.py'
Oct 09 09:48:46 compute-2 sudo[140070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:46 compute-2 python3.9[140072]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:46 compute-2 sudo[140070]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:46.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:46 compute-2 ceph-mon[5983]: pgmap v454: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:47 compute-2 sudo[140222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dqtqevzlebgmdxjafyohngfrfqtxanmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003326.9353633-1661-201639187717407/AnsiballZ_lineinfile.py'
Oct 09 09:48:47 compute-2 sudo[140222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:47 compute-2 python3.9[140224]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:47 compute-2 sudo[140222]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:47 compute-2 sudo[140375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdnmxgytpygcqzpsluhpbagqjwrljqgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003327.3866503-1661-101934789116366/AnsiballZ_lineinfile.py'
Oct 09 09:48:47 compute-2 sudo[140375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:47 compute-2 python3.9[140377]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:47 compute-2 sudo[140375]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:47.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:47 compute-2 sudo[140527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-davjqqsaohznwcroidtrpfdnpgsasbrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003327.8222106-1661-100782495844482/AnsiballZ_lineinfile.py'
Oct 09 09:48:48 compute-2 sudo[140527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:48 compute-2 python3.9[140529]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:48 compute-2 sudo[140527]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:48 compute-2 sudo[140680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsckvyoejnmfbfzqphnljghzqjjxmtfd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003328.278179-1661-255708883683985/AnsiballZ_lineinfile.py'
Oct 09 09:48:48 compute-2 sudo[140680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:48 compute-2 python3.9[140682]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:48 compute-2 sudo[140680]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:48.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:48 compute-2 ceph-mon[5983]: pgmap v455: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:49 compute-2 sudo[140832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkviokkxguchbdmsrkvdmmqdxliwaacp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003328.894261-1748-108433297053174/AnsiballZ_stat.py'
Oct 09 09:48:49 compute-2 sudo[140832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:49 compute-2 python3.9[140834]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:48:49 compute-2 sudo[140832]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:49 compute-2 sudo[140987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egradmyxaznyjwdenhmchtplptdtuqat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003329.4150205-1772-69318645389389/AnsiballZ_file.py'
Oct 09 09:48:49 compute-2 sudo[140987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:49 compute-2 python3.9[140989]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:49 compute-2 sudo[140987]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:49.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:50 compute-2 sudo[141066]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:48:50 compute-2 sudo[141066]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:48:50 compute-2 sudo[141066]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:50 compute-2 sudo[141165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghcltixggkpncpuxoalwuebdomhjoajl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.0630462-1799-40785397184536/AnsiballZ_file.py'
Oct 09 09:48:50 compute-2 sudo[141165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:50 compute-2 python3.9[141167]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:50 compute-2 sudo[141165]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:50.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:50 compute-2 sudo[141317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyouqsptpndxxxpjgcdebpxwjgqcczyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.6093578-1823-92797210859899/AnsiballZ_stat.py'
Oct 09 09:48:50 compute-2 sudo[141317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:50 compute-2 ceph-mon[5983]: pgmap v456: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:50 compute-2 python3.9[141319]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:50 compute-2 sudo[141317]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-2 sudo[141395]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbbaastokrkmshvdstkcrnyttdmaejod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003330.6093578-1823-92797210859899/AnsiballZ_file.py'
Oct 09 09:48:51 compute-2 sudo[141395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:51 compute-2 python3.9[141397]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:51 compute-2 sudo[141395]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-2 sudo[141548]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eiegcvzghtkipodxzxnqophanwckcaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003331.4095657-1823-213130060698881/AnsiballZ_stat.py'
Oct 09 09:48:51 compute-2 sudo[141548]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:51 compute-2 python3.9[141550]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:51 compute-2 sudo[141548]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:48:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:51.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:48:51 compute-2 sudo[141626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfoxgcfdkmwdxmrigssdpbsmjettkxfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003331.4095657-1823-213130060698881/AnsiballZ_file.py'
Oct 09 09:48:51 compute-2 sudo[141626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:52 compute-2 python3.9[141628]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:52 compute-2 sudo[141626]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:52 compute-2 sudo[141793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ragaglcbggyiwwuexteaewwuvlhuizac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.2547996-1892-238122845901444/AnsiballZ_file.py'
Oct 09 09:48:52 compute-2 sudo[141793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:52 compute-2 podman[141753]: 2025-10-09 09:48:52.485070155 +0000 UTC m=+0.055552182 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:48:52 compute-2 python3.9[141799]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:52 compute-2 sudo[141793]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:52.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:52 compute-2 ceph-mon[5983]: pgmap v457: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:52 compute-2 sudo[141954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dotriutxnzhohenegyrkokrqcmvzynww ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.7871766-1916-114923220215044/AnsiballZ_stat.py'
Oct 09 09:48:52 compute-2 sudo[141954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:53 compute-2 python3.9[141956]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:53 compute-2 sudo[141954]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:53 compute-2 sudo[142032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnnwunggumiszylytuneoahbrireljbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003332.7871766-1916-114923220215044/AnsiballZ_file.py'
Oct 09 09:48:53 compute-2 sudo[142032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:53 compute-2 python3.9[142034]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:53 compute-2 sudo[142032]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:53.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:53 compute-2 sudo[142185]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dquidzsbzuowwnzgijjnwjrmnjtspxbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003333.7199273-1952-202493496855968/AnsiballZ_stat.py'
Oct 09 09:48:53 compute-2 sudo[142185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-2 python3.9[142187]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:54 compute-2 sudo[142185]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:54 compute-2 sudo[142264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbypopdqikzscikxxwfqyqylljbefwps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003333.7199273-1952-202493496855968/AnsiballZ_file.py'
Oct 09 09:48:54 compute-2 sudo[142264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-2 python3.9[142266]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:54 compute-2 sudo[142264]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:54.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:54 compute-2 sudo[142416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyikyhekopbxtsnhtflncqiapplkztek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003334.5655422-1988-192626875258612/AnsiballZ_systemd.py'
Oct 09 09:48:54 compute-2 sudo[142416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:54 compute-2 ceph-mon[5983]: pgmap v458: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:48:55 compute-2 python3.9[142418]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:48:55 compute-2 systemd[1]: Reloading.
Oct 09 09:48:55 compute-2 systemd-rc-local-generator[142440]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:55 compute-2 systemd-sysv-generator[142443]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:55 compute-2 sudo[142416]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:55 compute-2 sudo[142607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkcoqbpjbvuaslkitphejjuxorltdthx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003335.4729831-2012-210438021823145/AnsiballZ_stat.py'
Oct 09 09:48:55 compute-2 sudo[142607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:55 compute-2 python3.9[142609]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:55 compute-2 sudo[142607]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:48:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:55.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:48:55 compute-2 sudo[142685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdddqprjtjouuohbwafiptymmmiaabwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003335.4729831-2012-210438021823145/AnsiballZ_file.py'
Oct 09 09:48:55 compute-2 sudo[142685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-2 python3.9[142687]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:56 compute-2 sudo[142685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:56 compute-2 sudo[142838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqbakegsqigwsppxindovinybfyklycq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003336.3479593-2048-159207972574194/AnsiballZ_stat.py'
Oct 09 09:48:56 compute-2 sudo[142838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-2 python3.9[142840]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:56 compute-2 sudo[142838]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:56.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:56 compute-2 sudo[142916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmnemwqjkxxkzxcobzlphxgfnriuucfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003336.3479593-2048-159207972574194/AnsiballZ_file.py'
Oct 09 09:48:56 compute-2 sudo[142916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:56 compute-2 ceph-mon[5983]: pgmap v459: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:48:57 compute-2 python3.9[142918]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:48:57 compute-2 sudo[142916]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:57 compute-2 sudo[143068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssuapaibqcdiphencbzftwvnpbjghxoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003337.215606-2084-198195248854097/AnsiballZ_systemd.py'
Oct 09 09:48:57 compute-2 sudo[143068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:57 compute-2 python3.9[143070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:48:57 compute-2 systemd[1]: Reloading.
Oct 09 09:48:57 compute-2 systemd-sysv-generator[143096]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:48:57 compute-2 systemd-rc-local-generator[143092]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:48:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:57.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:57 compute-2 systemd[1]: Starting Create netns directory...
Oct 09 09:48:57 compute-2 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct 09 09:48:57 compute-2 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct 09 09:48:57 compute-2 systemd[1]: Finished Create netns directory.
Oct 09 09:48:57 compute-2 sudo[143068]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:58 compute-2 sudo[143263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkalcjngoeixfugaacegnmbtbtgpvgbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.3569431-2114-103985156066472/AnsiballZ_file.py'
Oct 09 09:48:58 compute-2 sudo[143263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:58 compute-2 python3.9[143265]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:58 compute-2 sudo[143263]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:48:58.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:48:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:58 compute-2 ceph-mon[5983]: pgmap v460: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:48:59 compute-2 sudo[143415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djxtdyadcrsxevihvkbjtbloqbpcaoup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.89317-2138-76611640407289/AnsiballZ_stat.py'
Oct 09 09:48:59 compute-2 sudo[143415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:59 compute-2 python3.9[143417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:48:59 compute-2 sudo[143415]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:59 compute-2 sudo[143538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rstwmtsynbllnknqvbjfwjysilluctyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003338.89317-2138-76611640407289/AnsiballZ_copy.py'
Oct 09 09:48:59 compute-2 sudo[143538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:48:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:48:59 compute-2 python3.9[143540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003338.89317-2138-76611640407289/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:48:59 compute-2 sudo[143538]: pam_unix(sudo:session): session closed for user root
Oct 09 09:48:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:48:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:48:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:48:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:48:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:48:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:48:59.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:48:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:00 compute-2 sudo[143692]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waqhpzwjdchtnhyconlidyeydrvnswxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.070506-2189-10992564377632/AnsiballZ_file.py'
Oct 09 09:49:00 compute-2 sudo[143692]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:00 compute-2 python3.9[143694]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:00 compute-2 sudo[143692]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:00.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:00 compute-2 sudo[143844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxvrikpxwtpqfqnvzxgxwzyhtgqllyqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.616925-2212-230479721977533/AnsiballZ_stat.py'
Oct 09 09:49:00 compute-2 sudo[143844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:00 compute-2 ceph-mon[5983]: pgmap v461: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:00 compute-2 python3.9[143846]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:49:00 compute-2 sudo[143844]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:01 compute-2 sudo[143967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrywyfeaqzdtzvwtxbgzhjghibciznf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003340.616925-2212-230479721977533/AnsiballZ_copy.py'
Oct 09 09:49:01 compute-2 sudo[143967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:01 compute-2 python3.9[143969]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003340.616925-2212-230479721977533/.source.json _original_basename=.unvqt7g5 follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:01 compute-2 sudo[143967]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:01 compute-2 sudo[144128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zluxberfwrwwbcogtaryptckyjpggeck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003341.5907195-2258-80471732288838/AnsiballZ_file.py'
Oct 09 09:49:01 compute-2 sudo[144128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:01 compute-2 podman[144094]: 2025-10-09 09:49:01.791777299 +0000 UTC m=+0.036718135 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct 09 09:49:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:01.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:01 compute-2 python3.9[144135]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:01 compute-2 sudo[144128]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:02 compute-2 sudo[144289]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufhzpavcqzhjhqonbsumdwltklobrabh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003342.187836-2281-44903127575807/AnsiballZ_stat.py'
Oct 09 09:49:02 compute-2 sudo[144289]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:02 compute-2 sudo[144289]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:02.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:02 compute-2 sudo[144412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwludvmrnifwyyzjjxnwbplwgdsckxev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003342.187836-2281-44903127575807/AnsiballZ_copy.py'
Oct 09 09:49:02 compute-2 sudo[144412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:02 compute-2 sudo[144412]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:02 compute-2 ceph-mon[5983]: pgmap v462: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:03 compute-2 sudo[144565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjtngmxotelztonhrxbhrnkvchsgdase ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003343.4559886-2333-97171613420336/AnsiballZ_container_config_data.py'
Oct 09 09:49:03 compute-2 sudo[144565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:03 compute-2 python3.9[144567]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct 09 09:49:03 compute-2 sudo[144565]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:03.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:04 compute-2 sudo[144718]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdnvskczykeojqyqwlooqojvrtjgashm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003344.0630493-2360-209667579139543/AnsiballZ_container_config_hash.py'
Oct 09 09:49:04 compute-2 sudo[144718]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:04 compute-2 python3.9[144720]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:49:04 compute-2 sudo[144718]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:04.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:04 compute-2 sudo[144870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvagkkospdrvmtgqvabfrmgmqaukjyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003344.7431214-2387-273924497198587/AnsiballZ_podman_container_info.py'
Oct 09 09:49:04 compute-2 sudo[144870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:04 compute-2 ceph-mon[5983]: pgmap v463: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:05 compute-2 python3.9[144872]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct 09 09:49:05 compute-2 sudo[144870]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:05.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:06 compute-2 sudo[145043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnwlsootxixmurjovvwwrncnsrvyaygg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003346.2613113-2426-85886073050418/AnsiballZ_edpm_container_manage.py'
Oct 09 09:49:06 compute-2 sudo[145043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:06 compute-2 python3[145045]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:49:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:06.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:06 compute-2 ceph-mon[5983]: pgmap v464: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:07.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:08 compute-2 podman[145056]: 2025-10-09 09:49:08.439986535 +0000 UTC m=+1.735505063 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-2 podman[145104]: 2025-10-09 09:49:08.530462145 +0000 UTC m=+0.027020918 container create 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3)
Oct 09 09:49:08 compute-2 podman[145104]: 2025-10-09 09:49:08.516679363 +0000 UTC m=+0.013238137 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-2 python3[145045]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct 09 09:49:08 compute-2 sudo[145043]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:08 compute-2 ceph-mon[5983]: pgmap v465: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:09 compute-2 sudo[145281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojypplcpwyfsczepmcijxwixhbtmdojx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003348.8442569-2449-272031494240744/AnsiballZ_stat.py'
Oct 09 09:49:09 compute-2 sudo[145281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:09 compute-2 python3.9[145283]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:09 compute-2 sudo[145281]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:09 compute-2 sudo[145436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsbalcgqlxpnsjfwrwtejdrnbahssomr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003349.4671404-2476-31630758240285/AnsiballZ_file.py'
Oct 09 09:49:09 compute-2 sudo[145436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:09 compute-2 python3.9[145438]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:09 compute-2 sudo[145436]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:09.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:09 compute-2 sudo[145512]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qwusediroklzrqafovulxhxdlskhrlev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003349.4671404-2476-31630758240285/AnsiballZ_stat.py'
Oct 09 09:49:09 compute-2 sudo[145512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-2 python3.9[145514]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:10 compute-2 sudo[145512]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-2 sudo[145539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:10 compute-2 sudo[145539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:10 compute-2 sudo[145539]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:49:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:49:10.268 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:49:10 compute-2 sudo[145689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhpmahlnjhiymekekohjnaswayxxlvwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.1851814-2476-248700953731878/AnsiballZ_copy.py'
Oct 09 09:49:10 compute-2 sudo[145689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-2 python3.9[145691]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003350.1851814-2476-248700953731878/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:10 compute-2 sudo[145689]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:10 compute-2 sudo[145765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yycbpshuoyytnhntdgztctkhezoldpui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.1851814-2476-248700953731878/AnsiballZ_systemd.py'
Oct 09 09:49:10 compute-2 sudo[145765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:10 compute-2 ceph-mon[5983]: pgmap v466: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:11 compute-2 python3.9[145767]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:11 compute-2 systemd[1]: Reloading.
Oct 09 09:49:11 compute-2 systemd-rc-local-generator[145788]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:11 compute-2 systemd-sysv-generator[145791]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:11 compute-2 sudo[145765]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:11 compute-2 sudo[145877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcpywdjmspctertuehdrqbtbsfmulwrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003350.1851814-2476-248700953731878/AnsiballZ_systemd.py'
Oct 09 09:49:11 compute-2 sudo[145877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:11 compute-2 python3.9[145879]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:11 compute-2 systemd[1]: Reloading.
Oct 09 09:49:11 compute-2 systemd-sysv-generator[145907]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:11 compute-2 systemd-rc-local-generator[145904]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:11.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:12 compute-2 systemd[1]: Starting multipathd container...
Oct 09 09:49:12 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:49:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:12 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:12 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct 09 09:49:12 compute-2 podman[145919]: 2025-10-09 09:49:12.159505188 +0000 UTC m=+0.076622456 container init 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:49:12 compute-2 multipathd[145931]: + sudo -E kolla_set_configs
Oct 09 09:49:12 compute-2 sudo[145938]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:49:12 compute-2 sudo[145938]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:12 compute-2 sudo[145938]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:12 compute-2 podman[145919]: 2025-10-09 09:49:12.180552579 +0000 UTC m=+0.097669837 container start 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:49:12 compute-2 podman[145919]: multipathd
Oct 09 09:49:12 compute-2 systemd[1]: Started multipathd container.
Oct 09 09:49:12 compute-2 sudo[145877]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Validating config file
Oct 09 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:49:12 compute-2 multipathd[145931]: INFO:__main__:Writing out command to execute
Oct 09 09:49:12 compute-2 sudo[145938]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-2 multipathd[145931]: ++ cat /run_command
Oct 09 09:49:12 compute-2 multipathd[145931]: + CMD='/usr/sbin/multipathd -d'
Oct 09 09:49:12 compute-2 multipathd[145931]: + ARGS=
Oct 09 09:49:12 compute-2 multipathd[145931]: + sudo kolla_copy_cacerts
Oct 09 09:49:12 compute-2 sudo[145956]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:49:12 compute-2 sudo[145956]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:12 compute-2 sudo[145956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:12 compute-2 sudo[145956]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:12 compute-2 multipathd[145931]: + [[ ! -n '' ]]
Oct 09 09:49:12 compute-2 multipathd[145931]: + . kolla_extend_start
Oct 09 09:49:12 compute-2 multipathd[145931]: Running command: '/usr/sbin/multipathd -d'
Oct 09 09:49:12 compute-2 multipathd[145931]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 09:49:12 compute-2 multipathd[145931]: + umask 0022
Oct 09 09:49:12 compute-2 multipathd[145931]: + exec /usr/sbin/multipathd -d
Oct 09 09:49:12 compute-2 podman[145939]: 2025-10-09 09:49:12.244489592 +0000 UTC m=+0.056543671 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Oct 09 09:49:12 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:49:12 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.service: Failed with result 'exit-code'.
Oct 09 09:49:12 compute-2 multipathd[145931]: 1036.744954 | --------start up--------
Oct 09 09:49:12 compute-2 multipathd[145931]: 1036.745033 | read /etc/multipath.conf
Oct 09 09:49:12 compute-2 multipathd[145931]: 1036.748804 | path checkers start up
Oct 09 09:49:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:12.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:12 compute-2 python3.9[146119]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:49:12 compute-2 ceph-mon[5983]: pgmap v467: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:13 compute-2 sudo[146280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdczwfrqpazvxljifcuetwbrhgtljnws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003352.9639943-2585-201670003228203/AnsiballZ_command.py'
Oct 09 09:49:13 compute-2 sudo[146280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:13 compute-2 podman[146246]: 2025-10-09 09:49:13.159425837 +0000 UTC m=+0.038475769 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct 09 09:49:13 compute-2 python3.9[146287]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:13 compute-2 sudo[146280]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:13 compute-2 sudo[146449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwhzffentfamgexbhvfxucknsedhgsbk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003353.5377755-2609-220968997662091/AnsiballZ_systemd.py'
Oct 09 09:49:13 compute-2 sudo[146449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:13.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:13 compute-2 python3.9[146451]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:49:14 compute-2 systemd[1]: Stopping multipathd container...
Oct 09 09:49:14 compute-2 multipathd[145931]: 1038.531729 | exit (signal)
Oct 09 09:49:14 compute-2 multipathd[145931]: 1038.531767 | --------shut down-------
Oct 09 09:49:14 compute-2 systemd[1]: libpod-22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.scope: Deactivated successfully.
Oct 09 09:49:14 compute-2 podman[146455]: 2025-10-09 09:49:14.070586483 +0000 UTC m=+0.053483170 container died 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-20b662598013bb87.timer: Deactivated successfully.
Oct 09 09:49:14 compute-2 systemd[1]: Stopped /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct 09 09:49:14 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-userdata-shm.mount: Deactivated successfully.
Oct 09 09:49:14 compute-2 systemd[1]: var-lib-containers-storage-overlay-5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb-merged.mount: Deactivated successfully.
Oct 09 09:49:14 compute-2 podman[146455]: 2025-10-09 09:49:14.139579952 +0000 UTC m=+0.122476638 container cleanup 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct 09 09:49:14 compute-2 podman[146455]: multipathd
Oct 09 09:49:14 compute-2 podman[146481]: multipathd
Oct 09 09:49:14 compute-2 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct 09 09:49:14 compute-2 systemd[1]: Stopped multipathd container.
Oct 09 09:49:14 compute-2 systemd[1]: Starting multipathd container...
Oct 09 09:49:14 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:49:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5175b0302ad93b3f49137be4ecdffc3af532699aa52d442a121c6c953aa81cfb/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:49:14 compute-2 systemd[1]: Started /usr/bin/podman healthcheck run 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc.
Oct 09 09:49:14 compute-2 podman[146490]: 2025-10-09 09:49:14.276273098 +0000 UTC m=+0.070530247 container init 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:49:14 compute-2 multipathd[146502]: + sudo -E kolla_set_configs
Oct 09 09:49:14 compute-2 podman[146490]: 2025-10-09 09:49:14.293894288 +0000 UTC m=+0.088151418 container start 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:49:14 compute-2 podman[146490]: multipathd
Oct 09 09:49:14 compute-2 sudo[146508]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Oct 09 09:49:14 compute-2 sudo[146508]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:14 compute-2 sudo[146508]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:14 compute-2 systemd[1]: Started multipathd container.
Oct 09 09:49:14 compute-2 sudo[146449]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Validating config file
Oct 09 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:49:14 compute-2 multipathd[146502]: INFO:__main__:Writing out command to execute
Oct 09 09:49:14 compute-2 sudo[146508]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-2 multipathd[146502]: ++ cat /run_command
Oct 09 09:49:14 compute-2 multipathd[146502]: + CMD='/usr/sbin/multipathd -d'
Oct 09 09:49:14 compute-2 multipathd[146502]: + ARGS=
Oct 09 09:49:14 compute-2 multipathd[146502]: + sudo kolla_copy_cacerts
Oct 09 09:49:14 compute-2 sudo[146533]:     root : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Oct 09 09:49:14 compute-2 sudo[146533]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Oct 09 09:49:14 compute-2 sudo[146533]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0)
Oct 09 09:49:14 compute-2 sudo[146533]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-2 multipathd[146502]: Running command: '/usr/sbin/multipathd -d'
Oct 09 09:49:14 compute-2 multipathd[146502]: + [[ ! -n '' ]]
Oct 09 09:49:14 compute-2 multipathd[146502]: + . kolla_extend_start
Oct 09 09:49:14 compute-2 multipathd[146502]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct 09 09:49:14 compute-2 multipathd[146502]: + umask 0022
Oct 09 09:49:14 compute-2 multipathd[146502]: + exec /usr/sbin/multipathd -d
Oct 09 09:49:14 compute-2 multipathd[146502]: 1038.860461 | --------start up--------
Oct 09 09:49:14 compute-2 multipathd[146502]: 1038.860723 | read /etc/multipath.conf
Oct 09 09:49:14 compute-2 podman[146509]: 2025-10-09 09:49:14.369387754 +0000 UTC m=+0.066486492 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct 09 09:49:14 compute-2 multipathd[146502]: 1038.864244 | path checkers start up
Oct 09 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-68de4bb5e73be16b.service: Main process exited, code=exited, status=1/FAILURE
Oct 09 09:49:14 compute-2 systemd[1]: 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc-68de4bb5e73be16b.service: Failed with result 'exit-code'.
Oct 09 09:49:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:14.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:14 compute-2 sudo[146689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubrecuagjfbemcanvwyimvkpgsduhwas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003354.5787675-2633-271973054880607/AnsiballZ_file.py'
Oct 09 09:49:14 compute-2 sudo[146689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:14 compute-2 python3.9[146691]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:14 compute-2 sudo[146689]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:14 compute-2 ceph-mon[5983]: pgmap v468: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:15 compute-2 sudo[146842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxmlrimqrzyzdojolkdnmwonxttqeizv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003355.45399-2669-260425426431728/AnsiballZ_file.py'
Oct 09 09:49:15 compute-2 sudo[146842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:15 compute-2 python3.9[146844]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct 09 09:49:15 compute-2 sudo[146842]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:15.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:16 compute-2 sudo[146995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfamcrytrseiftnvilugmiypwujihufu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.0345805-2693-238991907349633/AnsiballZ_modprobe.py'
Oct 09 09:49:16 compute-2 sudo[146995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:16 compute-2 python3.9[146997]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct 09 09:49:16 compute-2 kernel: Key type psk registered
Oct 09 09:49:16 compute-2 sudo[146995]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:16.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:16 compute-2 sudo[147158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnwnlbletefabivglfnswkxszmpctvcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.6451046-2716-182697927741461/AnsiballZ_stat.py'
Oct 09 09:49:16 compute-2 sudo[147158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:17 compute-2 ceph-mon[5983]: pgmap v469: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:17 compute-2 python3.9[147160]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:49:17 compute-2 sudo[147158]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:17 compute-2 sudo[147281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibljufrxibosesvtcvrbkuhwkaubpehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003356.6451046-2716-182697927741461/AnsiballZ_copy.py'
Oct 09 09:49:17 compute-2 sudo[147281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:17 compute-2 python3.9[147283]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1760003356.6451046-2716-182697927741461/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:17 compute-2 sudo[147281]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:17 compute-2 sudo[147434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-stmxbcphifipgczcfccwkqricqobzdml ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003357.7147903-2765-265887981274005/AnsiballZ_lineinfile.py'
Oct 09 09:49:17 compute-2 sudo[147434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:17.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:18 compute-2 python3.9[147436]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:18 compute-2 sudo[147434]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:18 compute-2 sudo[147587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvdenkrfbdrfkgwfashpkbsqwtxttrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.2410104-2789-247292745364082/AnsiballZ_systemd.py'
Oct 09 09:49:18 compute-2 sudo[147587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:18 compute-2 python3.9[147589]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:49:18 compute-2 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct 09 09:49:18 compute-2 systemd[1]: Stopped Load Kernel Modules.
Oct 09 09:49:18 compute-2 systemd[1]: Stopping Load Kernel Modules...
Oct 09 09:49:18 compute-2 systemd[1]: Starting Load Kernel Modules...
Oct 09 09:49:18 compute-2 systemd[1]: Finished Load Kernel Modules.
Oct 09 09:49:18 compute-2 sudo[147587]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:18 compute-2 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct 09 09:49:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:18.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:19 compute-2 ceph-mon[5983]: pgmap v470: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:19 compute-2 sudo[147744]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flmlwvbejrdmvutpooddipmpkgnqhchb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.9815688-2813-41747397954762/AnsiballZ_setup.py'
Oct 09 09:49:19 compute-2 sudo[147744]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:19 compute-2 python3.9[147746]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct 09 09:49:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:19 compute-2 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct 09 09:49:19 compute-2 sudo[147744]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:49:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:19.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:49:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:20 compute-2 sudo[147830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cifbqfltqaceoagzqkaczitdrzigmism ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003358.9815688-2813-41747397954762/AnsiballZ_dnf.py'
Oct 09 09:49:20 compute-2 sudo[147830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:20 compute-2 python3.9[147832]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct 09 09:49:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:20.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:21 compute-2 ceph-mon[5983]: pgmap v471: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:21.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:22.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:23 compute-2 ceph-mon[5983]: pgmap v472: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:23 compute-2 podman[147837]: 2025-10-09 09:49:23.225360822 +0000 UTC m=+0.059998517 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:49:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:23.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:49:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:24.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:25 compute-2 ceph-mon[5983]: pgmap v473: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:25 compute-2 systemd[1]: Reloading.
Oct 09 09:49:25 compute-2 systemd-rc-local-generator[147886]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:25 compute-2 systemd-sysv-generator[147889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:25 compute-2 systemd[1]: Reloading.
Oct 09 09:49:25 compute-2 systemd-sysv-generator[147925]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:25 compute-2 systemd-rc-local-generator[147922]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:25.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:25 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event0 (Power Button)
Oct 09 09:49:25 compute-2 systemd-logind[800]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct 09 09:49:26 compute-2 lvm[147969]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 09:49:26 compute-2 lvm[147969]: VG ceph_vg0 finished
Oct 09 09:49:26 compute-2 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct 09 09:49:26 compute-2 systemd[1]: Starting man-db-cache-update.service...
Oct 09 09:49:26 compute-2 systemd[1]: Reloading.
Oct 09 09:49:26 compute-2 systemd-rc-local-generator[148012]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:26 compute-2 systemd-sysv-generator[148016]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:26 compute-2 systemd[1]: Queuing reload/restart jobs for marked units…
Oct 09 09:49:26 compute-2 sudo[147830]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:26.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:26 compute-2 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct 09 09:49:26 compute-2 systemd[1]: Finished man-db-cache-update.service.
Oct 09 09:49:26 compute-2 systemd[1]: man-db-cache-update.service: Consumed 1.021s CPU time.
Oct 09 09:49:26 compute-2 systemd[1]: run-rd3290976340a4b3698477c5ce5214fbb.service: Deactivated successfully.
Oct 09 09:49:27 compute-2 ceph-mon[5983]: pgmap v474: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:27 compute-2 systemd[1]: virtqemud.service: Deactivated successfully.
Oct 09 09:49:27 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 09 09:49:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:27.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:28 compute-2 sudo[149310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqhjumoqfwoqiibpdvxtugeepavnfoue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003367.8433363-2849-124661676013024/AnsiballZ_file.py'
Oct 09 09:49:28 compute-2 sudo[149310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:28 compute-2 python3.9[149312]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:28 compute-2 sudo[149310]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:28.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:28 compute-2 python3.9[149463]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct 09 09:49:29 compute-2 ceph-mon[5983]: pgmap v475: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:29 compute-2 sudo[149617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdivmrprclqyfvkgssxhkijcbxzputy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003369.3152728-2901-69132719884687/AnsiballZ_file.py'
Oct 09 09:49:29 compute-2 sudo[149617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:29 compute-2 python3.9[149620]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:29 compute-2 sudo[149617]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:29.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:30 compute-2 sudo[149698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:30 compute-2 sudo[149698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:30 compute-2 sudo[149698]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:30 compute-2 sudo[149796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qemcuwmnoeejdrxrigeeaprprcwoqktc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003370.1055255-2934-9855189332073/AnsiballZ_systemd_service.py'
Oct 09 09:49:30 compute-2 sudo[149796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:30.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:30 compute-2 python3.9[149798]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:30 compute-2 systemd[1]: Reloading.
Oct 09 09:49:30 compute-2 systemd-sysv-generator[149823]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:30 compute-2 systemd-rc-local-generator[149820]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:31 compute-2 ceph-mon[5983]: pgmap v476: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:31 compute-2 sudo[149796]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:31 compute-2 python3.9[149982]: ansible-ansible.builtin.service_facts Invoked
Oct 09 09:49:31 compute-2 network[150000]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct 09 09:49:31 compute-2 network[150001]: 'network-scripts' will be removed from distribution in near future.
Oct 09 09:49:31 compute-2 network[150002]: It is advised to switch to 'NetworkManager' instead for network management.
Oct 09 09:49:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:31.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:32 compute-2 podman[150009]: 2025-10-09 09:49:32.403716124 +0000 UTC m=+0.053890289 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 09 09:49:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:32.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:33 compute-2 sudo[150088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:49:33 compute-2 sudo[150088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:33 compute-2 sudo[150088]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:33 compute-2 ceph-mon[5983]: pgmap v477: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:33 compute-2 sudo[150117]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:49:33 compute-2 sudo[150117]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:33 compute-2 sudo[150117]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:33.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:49:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:49:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:34.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:35 compute-2 ceph-mon[5983]: pgmap v478: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:35 compute-2 ceph-mon[5983]: pgmap v479: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:35 compute-2 sudo[150377]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esqlqkiixmqwrnwyqnijbkbpkikcrgjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003375.1425292-2991-245800783279563/AnsiballZ_systemd_service.py'
Oct 09 09:49:35 compute-2 sudo[150377]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:35 compute-2 python3.9[150379]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:35 compute-2 sudo[150377]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:35 compute-2 sudo[150531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqclohvwmazeticecwdfduwzgoguicgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003375.7120373-2991-175231678887181/AnsiballZ_systemd_service.py'
Oct 09 09:49:35 compute-2 sudo[150531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:35.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:36 compute-2 python3.9[150533]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:36 compute-2 sudo[150531]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:36 compute-2 sudo[150685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjugsjvwqalonsxpmdtakfppvyfvudxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003376.265122-2991-273993284581718/AnsiballZ_systemd_service.py'
Oct 09 09:49:36 compute-2 sudo[150685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:36 compute-2 python3.9[150687]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:36 compute-2 sudo[150685]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:36.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:37 compute-2 sudo[150838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwhoyjvmvvmtvfhrfopviasejbcjvqgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003376.8373148-2991-108543323218990/AnsiballZ_systemd_service.py'
Oct 09 09:49:37 compute-2 sudo[150838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:37 compute-2 ceph-mon[5983]: pgmap v480: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:49:37 compute-2 sudo[150841]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:49:37 compute-2 sudo[150841]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:37 compute-2 sudo[150841]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:37 compute-2 python3.9[150840]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:37 compute-2 sudo[150838]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:37 compute-2 sudo[151017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouelfsmgpspbxyszmgaembijlmiprimo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003377.5100303-2991-116557641885781/AnsiballZ_systemd_service.py'
Oct 09 09:49:37 compute-2 sudo[151017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:37.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:37 compute-2 python3.9[151019]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:38 compute-2 sudo[151017]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:38 compute-2 sudo[151171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aedbxyggkjarfhvpcfyavbicxnfyemiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003378.1082172-2991-214592513174873/AnsiballZ_systemd_service.py'
Oct 09 09:49:38 compute-2 sudo[151171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:38 compute-2 python3.9[151173]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:38 compute-2 sudo[151171]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:38.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:38 compute-2 sudo[151324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-issckghntsbpbdczuringkhsopkwndho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003378.6953206-2991-54421243994646/AnsiballZ_systemd_service.py'
Oct 09 09:49:38 compute-2 sudo[151324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:39 compute-2 ceph-mon[5983]: pgmap v481: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:39 compute-2 python3.9[151326]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:39 compute-2 sudo[151324]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:39 compute-2 sudo[151477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlaxwcetamuqrtsdunekyaqcjlkvpnmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003379.253951-2991-42098999424917/AnsiballZ_systemd_service.py'
Oct 09 09:49:39 compute-2 sudo[151477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:39 compute-2 python3.9[151479]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:49:39 compute-2 sudo[151477]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:39.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:40 compute-2 sudo[151632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkleeoknivvrxdzmhkhrmolhaqvzkxan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003380.572178-3168-254619252322386/AnsiballZ_file.py'
Oct 09 09:49:40 compute-2 sudo[151632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:40.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:40 compute-2 python3.9[151634]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:40 compute-2 sudo[151632]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:41 compute-2 ceph-mon[5983]: pgmap v482: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:41 compute-2 sudo[151784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rygnbaexjroijvjeziexaiggrrrblkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.0261223-3168-170181592889062/AnsiballZ_file.py'
Oct 09 09:49:41 compute-2 sudo[151784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:41 compute-2 python3.9[151786]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:41 compute-2 sudo[151784]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:41 compute-2 sudo[151937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbkvonswdhkwxnribclrtvquzvpbxhec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.469353-3168-116198014960126/AnsiballZ_file.py'
Oct 09 09:49:41 compute-2 sudo[151937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:41 compute-2 python3.9[151939]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:41 compute-2 sudo[151937]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:41.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:42 compute-2 sudo[152089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbdafsqwihzwudflrovuzqvnwzpgcpgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003381.9137092-3168-191719481855616/AnsiballZ_file.py'
Oct 09 09:49:42 compute-2 sudo[152089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:42 compute-2 python3.9[152091]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:42 compute-2 sudo[152089]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:42 compute-2 sudo[152242]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ajdtubveqzvkhkfijynugqvzkfskjfst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003382.362923-3168-98141157435413/AnsiballZ_file.py'
Oct 09 09:49:42 compute-2 sudo[152242]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:42 compute-2 python3.9[152244]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:42 compute-2 sudo[152242]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:42.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:43 compute-2 sudo[152394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvutfuvtibdnufmrrkzstrwkhrqaetxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003382.842748-3168-48102442704425/AnsiballZ_file.py'
Oct 09 09:49:43 compute-2 sudo[152394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:43 compute-2 ceph-mon[5983]: pgmap v483: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:43 compute-2 python3.9[152396]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:43 compute-2 sudo[152394]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:43 compute-2 sudo[152555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxxzuunirwzpttuqyhpwsvtacirmjypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003383.3091514-3168-231216702103529/AnsiballZ_file.py'
Oct 09 09:49:43 compute-2 sudo[152555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:43 compute-2 podman[152520]: 2025-10-09 09:49:43.509494249 +0000 UTC m=+0.045130359 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 09:49:43 compute-2 python3.9[152565]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:43 compute-2 sudo[152555]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:43.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:43 compute-2 sudo[152715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twceoewlxxknudnvmdsqpwotiglknwly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003383.7835884-3168-215707401547034/AnsiballZ_file.py'
Oct 09 09:49:43 compute-2 sudo[152715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:44 compute-2 python3.9[152717]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:44 compute-2 sudo[152715]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:44 compute-2 sudo[152877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfeeqeoponsqpwpzwhtatzbkiyxzuyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003384.3777568-3339-216682411860939/AnsiballZ_file.py'
Oct 09 09:49:44 compute-2 sudo[152877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:44 compute-2 podman[152842]: 2025-10-09 09:49:44.595562331 +0000 UTC m=+0.046644261 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:49:44 compute-2 python3.9[152886]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:44 compute-2 sudo[152877]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:44.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:45 compute-2 sudo[153036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnoxelxvtzanoiuxrffwbstzhqcdpoth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003384.8456388-3339-131233436768810/AnsiballZ_file.py'
Oct 09 09:49:45 compute-2 sudo[153036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:45 compute-2 ceph-mon[5983]: pgmap v484: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:45 compute-2 python3.9[153038]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:45 compute-2 sudo[153036]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:45 compute-2 sudo[153188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjyualhmwelxtccwormcnrqzvitlijbl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003385.2932162-3339-53539471855737/AnsiballZ_file.py'
Oct 09 09:49:45 compute-2 sudo[153188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:45 compute-2 python3.9[153190]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:45 compute-2 sudo[153188]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:45 compute-2 sudo[153341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gizpekrfpyyqsztimrzgottwludlqzsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003385.7329938-3339-190110357957689/AnsiballZ_file.py'
Oct 09 09:49:45 compute-2 sudo[153341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:45.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:46 compute-2 python3.9[153343]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:46 compute-2 sudo[153341]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:46 compute-2 sudo[153494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxlqxohxaoayjpaasrejtuuyjmkxnoot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003386.1758807-3339-114796381099281/AnsiballZ_file.py'
Oct 09 09:49:46 compute-2 sudo[153494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:46 compute-2 python3.9[153496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:46 compute-2 sudo[153494]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:46.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:46 compute-2 sudo[153646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkrntwjtvozcmcrjjhkpbjuuifxewibd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003386.6479235-3339-212154597587765/AnsiballZ_file.py'
Oct 09 09:49:46 compute-2 sudo[153646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:46 compute-2 python3.9[153648]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:47 compute-2 sudo[153646]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:47 compute-2 ceph-mon[5983]: pgmap v485: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:49:47 compute-2 sudo[153798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fntebxdxsgsljfvrfkgjloukyegcgxoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003387.09574-3339-266778862719534/AnsiballZ_file.py'
Oct 09 09:49:47 compute-2 sudo[153798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:47 compute-2 python3.9[153800]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:47 compute-2 sudo[153798]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:47 compute-2 sudo[153951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrylfdvpfaqmfffmeoenuihdltouxcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003387.5360196-3339-69306667738466/AnsiballZ_file.py'
Oct 09 09:49:47 compute-2 sudo[153951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:47 compute-2 python3.9[153953]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:49:47 compute-2 sudo[153951]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:47.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:48 compute-2 sudo[154104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzsokknomdbtrzofbvorcuhpornzcskx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003388.1645274-3513-53699826136414/AnsiballZ_command.py'
Oct 09 09:49:48 compute-2 sudo[154104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:48 compute-2 python3.9[154106]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:48 compute-2 sudo[154104]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:48.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:49 compute-2 ceph-mon[5983]: pgmap v486: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:49 compute-2 python3.9[154258]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct 09 09:49:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:49 compute-2 sudo[154409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luqlitkfyrgbikzfyyupcfjgukthonqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003389.6962562-3567-203559303097038/AnsiballZ_systemd_service.py'
Oct 09 09:49:49 compute-2 sudo[154409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:49.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:50 compute-2 python3.9[154411]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:49:50 compute-2 systemd[1]: Reloading.
Oct 09 09:49:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:49:50 compute-2 systemd-rc-local-generator[154435]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:49:50 compute-2 systemd-sysv-generator[154439]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:49:50 compute-2 sudo[154447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:49:50 compute-2 sudo[154447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:49:50 compute-2 sudo[154447]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:50 compute-2 sudo[154409]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:50 compute-2 sudo[154621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlxahzccipyuopcmmoythqrwqbivtgom ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003390.572276-3591-237920708069888/AnsiballZ_command.py'
Oct 09 09:49:50 compute-2 sudo[154621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:49:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:50.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:49:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:50 compute-2 python3.9[154623]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:50 compute-2 sudo[154621]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:51 compute-2 ceph-mon[5983]: pgmap v487: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:49:51 compute-2 sudo[154774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiejxvrqzgucvwmcfxvhaciaputqavgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.0389414-3591-117380233538233/AnsiballZ_command.py'
Oct 09 09:49:51 compute-2 sudo[154774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:51 compute-2 python3.9[154776]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:51 compute-2 sudo[154774]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:51 compute-2 sudo[154928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrptnjxcuougmjgwhxifrnlolbqmzlfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.4964185-3591-70107732787382/AnsiballZ_command.py'
Oct 09 09:49:51 compute-2 sudo[154928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:51 compute-2 python3.9[154930]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:51 compute-2 sudo[154928]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:51.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:52 compute-2 sudo[155081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eviaezouavkteuuaotumyfseqtknqiem ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003391.938274-3591-192135112372749/AnsiballZ_command.py'
Oct 09 09:49:52 compute-2 sudo[155081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:52 compute-2 python3.9[155083]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:52 compute-2 sudo[155081]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:52 compute-2 sudo[155235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezcfbhdiwowbffeoigjdjmnmdobpmadf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003392.3832507-3591-13848207790268/AnsiballZ_command.py'
Oct 09 09:49:52 compute-2 sudo[155235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:52 compute-2 python3.9[155237]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:52 compute-2 sudo[155235]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:52.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:53 compute-2 sudo[155388]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeruuxmbjmyifecccjzvuxdiwwlrxmza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003392.8314955-3591-97170882815317/AnsiballZ_command.py'
Oct 09 09:49:53 compute-2 sudo[155388]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:53 compute-2 ceph-mon[5983]: pgmap v488: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:53 compute-2 python3.9[155390]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:53 compute-2 sudo[155388]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:53 compute-2 sudo[155550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aibyooqylgqucucaurfqeodxjgwgrxbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003393.5227842-3591-23633879485336/AnsiballZ_command.py'
Oct 09 09:49:53 compute-2 sudo[155550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:53 compute-2 podman[155516]: 2025-10-09 09:49:53.752412948 +0000 UTC m=+0.062267427 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:53 compute-2 python3.9[155560]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:53 compute-2 sudo[155550]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:53.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:54 compute-2 sudo[155719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptixqdsxplmmcpenhutkqbsuqzitlizd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003393.9996252-3591-140023782278743/AnsiballZ_command.py'
Oct 09 09:49:54 compute-2 sudo[155719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:54 compute-2 python3.9[155721]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct 09 09:49:54 compute-2 sudo[155719]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:54.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:55 compute-2 ceph-mon[5983]: pgmap v489: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:55 compute-2 sudo[155873]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsdcbrrhpppkqtoasnirrzqbquavznci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003395.5036967-3798-122073098908427/AnsiballZ_file.py'
Oct 09 09:49:55 compute-2 sudo[155873]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:55 compute-2 python3.9[155875]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:55 compute-2 sudo[155873]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:55.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:56 compute-2 sudo[156026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njxkkvjadsfuivpaojintwrkzwdhdrko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003395.9888036-3798-27363120013600/AnsiballZ_file.py'
Oct 09 09:49:56 compute-2 sudo[156026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:56 compute-2 python3.9[156028]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:56 compute-2 sudo[156026]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:56.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:49:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:56 compute-2 sudo[156178]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmalpukzqhfalsacdklsbfigncvcrcal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003396.69086-3798-213084247985218/AnsiballZ_file.py'
Oct 09 09:49:56 compute-2 sudo[156178]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:57 compute-2 python3.9[156180]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:57 compute-2 sudo[156178]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:57 compute-2 ceph-mon[5983]: pgmap v490: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:49:57 compute-2 sudo[156330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fricjqenjigxbzbmekrxirjxsqxdkvnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003397.2061827-3864-190079593938860/AnsiballZ_file.py'
Oct 09 09:49:57 compute-2 sudo[156330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:57 compute-2 python3.9[156332]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:57 compute-2 sudo[156330]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:57 compute-2 sudo[156483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gezbnryujtaafywssoxjeublnrealruz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003397.6665142-3864-103255441461604/AnsiballZ_file.py'
Oct 09 09:49:57 compute-2 sudo[156483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:57.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:49:58 compute-2 python3.9[156485]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:49:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:49:58 compute-2 sudo[156483]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:58 compute-2 ceph-mon[5983]: pgmap v491: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:49:58 compute-2 sudo[156636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jsletyytxtfmjgpoxkmivboskyemmrsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003398.139013-3864-173995036238052/AnsiballZ_file.py'
Oct 09 09:49:58 compute-2 sudo[156636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:58 compute-2 python3.9[156638]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-2 sudo[156636]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:58 compute-2 sudo[156788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfrzjaywklmtchlvnvqnngsyubvcxhuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003398.5999439-3864-241619206095802/AnsiballZ_file.py'
Oct 09 09:49:58 compute-2 sudo[156788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:49:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:49:58.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:58 compute-2 python3.9[156790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:58 compute-2 sudo[156788]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:59 compute-2 sudo[156940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyngxmtjmhidwfpuytcqumhokqrmxqov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.05551-3864-270249057182058/AnsiballZ_file.py'
Oct 09 09:49:59 compute-2 sudo[156940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:59 compute-2 python3.9[156942]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:59 compute-2 sudo[156940]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:49:59 compute-2 sudo[157093]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvragxpnzaroaupssifvlsjkkxuufmdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.5064526-3864-120027589347574/AnsiballZ_file.py'
Oct 09 09:49:59 compute-2 sudo[157093]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:49:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:49:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:49:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:49:59 compute-2 python3.9[157095]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:49:59 compute-2 sudo[157093]: pam_unix(sudo:session): session closed for user root
Oct 09 09:49:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:49:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:49:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:49:59.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:00 compute-2 systemd[1]: Starting system activity accounting tool...
Oct 09 09:50:00 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 09:50:00 compute-2 systemd[1]: Finished system activity accounting tool.
Oct 09 09:50:00 compute-2 sudo[157246]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlcyxsrrptebrysykptezwhcdfydzghf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003399.956786-3864-173527271644249/AnsiballZ_file.py'
Oct 09 09:50:00 compute-2 sudo[157246]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:00 compute-2 python3.9[157249]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:00 compute-2 sudo[157246]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:00 compute-2 ceph-mon[5983]: pgmap v492: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:00 compute-2 ceph-mon[5983]: Health detail: HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 09:50:00 compute-2 ceph-mon[5983]: [WRN] CEPHADM_FAILED_DAEMON: 1 failed cephadm daemon(s)
Oct 09 09:50:00 compute-2 ceph-mon[5983]:     daemon nfs.cephfs.0.0.compute-1.douegr on compute-1 is in error state
Oct 09 09:50:00 compute-2 sudo[157399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdmsvrqacefmukybsielrqlumtymrtvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003400.5790303-3864-180133351819309/AnsiballZ_file.py'
Oct 09 09:50:00 compute-2 sudo[157399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:00.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:00 compute-2 python3.9[157401]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:00 compute-2 sudo[157399]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:01 compute-2 sudo[157551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wdetwbcrdhxptipbohiusgjpifknwwyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003401.0255911-3864-89265683040283/AnsiballZ_file.py'
Oct 09 09:50:01 compute-2 sudo[157551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:01 compute-2 python3.9[157553]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:01 compute-2 sudo[157551]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:50:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:01.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:50:02 compute-2 ceph-mon[5983]: pgmap v493: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:50:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:02.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:03 compute-2 podman[157580]: 2025-10-09 09:50:03.205337179 +0000 UTC m=+0.039792499 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, container_name=iscsid)
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:03.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:04 compute-2 ceph-mon[5983]: pgmap v494: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:04.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:05.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:06 compute-2 sudo[157726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjwthjmygismsjtckzydvdbszerxbhsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003405.915695-4230-31550994329349/AnsiballZ_getent.py'
Oct 09 09:50:06 compute-2 sudo[157726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:06 compute-2 python3.9[157728]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct 09 09:50:06 compute-2 sudo[157726]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:06 compute-2 ceph-mon[5983]: pgmap v495: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:06.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:06 compute-2 sudo[157879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uygkkikyutciczaujveqnhhlbwfzrmqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003406.6207876-4255-237752896958497/AnsiballZ_group.py'
Oct 09 09:50:06 compute-2 sudo[157879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:07 compute-2 python3.9[157881]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct 09 09:50:07 compute-2 groupadd[157882]: group added to /etc/group: name=nova, GID=42436
Oct 09 09:50:07 compute-2 groupadd[157882]: group added to /etc/gshadow: name=nova
Oct 09 09:50:07 compute-2 groupadd[157882]: new group: name=nova, GID=42436
Oct 09 09:50:07 compute-2 sudo[157879]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:07 compute-2 sudo[158038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrenzlamcoevmgpxsxzudsypnmvjrkxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003407.3223639-4278-276766472849941/AnsiballZ_user.py'
Oct 09 09:50:07 compute-2 sudo[158038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:07 compute-2 python3.9[158040]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-2 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct 09 09:50:07 compute-2 useradd[158042]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Oct 09 09:50:07 compute-2 useradd[158042]: add 'nova' to group 'libvirt'
Oct 09 09:50:07 compute-2 useradd[158042]: add 'nova' to shadow group 'libvirt'
Oct 09 09:50:07 compute-2 sudo[158038]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:07.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:08 compute-2 sshd-session[158074]: Accepted publickey for zuul from 192.168.122.30 port 48834 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 09:50:08 compute-2 systemd-logind[800]: New session 39 of user zuul.
Oct 09 09:50:08 compute-2 systemd[1]: Started Session 39 of User zuul.
Oct 09 09:50:08 compute-2 sshd-session[158074]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 09:50:08 compute-2 ceph-mon[5983]: pgmap v496: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:08 compute-2 sshd-session[158077]: Received disconnect from 192.168.122.30 port 48834:11: disconnected by user
Oct 09 09:50:08 compute-2 sshd-session[158077]: Disconnected from user zuul 192.168.122.30 port 48834
Oct 09 09:50:08 compute-2 sshd-session[158074]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:50:08 compute-2 systemd[1]: session-39.scope: Deactivated successfully.
Oct 09 09:50:08 compute-2 systemd-logind[800]: Session 39 logged out. Waiting for processes to exit.
Oct 09 09:50:08 compute-2 systemd-logind[800]: Removed session 39.
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:08.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:09 compute-2 python3.9[158227]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:09 compute-2 python3.9[158348]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003408.9376636-4354-218489464400853/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:09.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:10 compute-2 python3.9[158499]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:50:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:10 compute-2 python3.9[158576]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:10 compute-2 sudo[158577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:10 compute-2 sudo[158577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:10 compute-2 sudo[158577]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:10 compute-2 ceph-mon[5983]: pgmap v497: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:10.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:10 compute-2 python3.9[158751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:11 compute-2 python3.9[158872]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003410.5074158-4354-142180640155539/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:11 compute-2 python3.9[159023]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:11.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:12 compute-2 python3.9[159144]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003411.336021-4354-226122153270060/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=d01cc1b48d783e4ed08d12bb4d0a107aba230a69 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:12 compute-2 python3.9[159295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:12 compute-2 ceph-mon[5983]: pgmap v498: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:12.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:12 compute-2 python3.9[159416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003412.1994257-4354-258665901310620/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:13 compute-2 sudo[159566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-haesjvtjbgqfiyrdrlsyjkbqxrqneylm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003413.2348945-4561-157623957494052/AnsiballZ_file.py'
Oct 09 09:50:13 compute-2 sudo[159566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:13 compute-2 python3.9[159568]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:13 compute-2 sudo[159566]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:13 compute-2 podman[159570]: 2025-10-09 09:50:13.661287643 +0000 UTC m=+0.036879943 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:13.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:13 compute-2 sudo[159735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvszoyletgspkgiqcougovjfehdaqkfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003413.7723503-4585-251767463390381/AnsiballZ_copy.py'
Oct 09 09:50:13 compute-2 sudo[159735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:14 compute-2 python3.9[159737]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:14 compute-2 sudo[159735]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:14 compute-2 sudo[159888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xpdxoxsbxmczmpxchvdnavrgvpxzwzet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.3045714-4609-54762736509295/AnsiballZ_stat.py'
Oct 09 09:50:14 compute-2 sudo[159888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:14 compute-2 python3.9[159890]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:14 compute-2 sudo[159888]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:14 compute-2 ceph-mon[5983]: pgmap v499: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:14.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:15 compute-2 sudo[160052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgcpxbdtqjoaabjbkxwbyjkvqmidcgjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.819866-4634-79753486685892/AnsiballZ_stat.py'
Oct 09 09:50:15 compute-2 sudo[160052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:15 compute-2 podman[160014]: 2025-10-09 09:50:15.052559858 +0000 UTC m=+0.063203086 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 09:50:15 compute-2 python3.9[160059]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:15 compute-2 sudo[160052]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:15 compute-2 sudo[160180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dfgrbmgtcllxzlimchslbznfctgjfgax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003414.819866-4634-79753486685892/AnsiballZ_copy.py'
Oct 09 09:50:15 compute-2 sudo[160180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:15 compute-2 python3.9[160182]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1760003414.819866-4634-79753486685892/.source _original_basename=.jpwrnzve follow=False checksum=aa4ab1190f26dbb82ebaaa3faed5c12a78625b91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct 09 09:50:15 compute-2 sudo[160180]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000016s ======
Oct 09 09:50:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:15.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000016s
Oct 09 09:50:16 compute-2 python3.9[160335]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:16 compute-2 ceph-mon[5983]: pgmap v500: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:16.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:16 compute-2 python3.9[160488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:17 compute-2 python3.9[160609]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003416.4954014-4711-14755408612466/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:17 compute-2 python3.9[160760]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:17.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:18 compute-2 python3.9[160881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1760003417.3833544-4756-48525577080055/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct 09 09:50:18 compute-2 sudo[161032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vtpdlsgpiklsxkmtggabkqvsbbehkvhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003418.4934258-4807-112834821979457/AnsiballZ_container_config_data.py'
Oct 09 09:50:18 compute-2 sudo[161032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:18 compute-2 ceph-mon[5983]: pgmap v501: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:18.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:18 compute-2 python3.9[161034]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct 09 09:50:18 compute-2 sudo[161032]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:19 compute-2 sudo[161184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynnngosevxlkdmiwyfwlazerxyqrvzcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003419.0966413-4833-191961821329280/AnsiballZ_container_config_hash.py'
Oct 09 09:50:19 compute-2 sudo[161184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:19 compute-2 python3.9[161186]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:50:19 compute-2 sudo[161184]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:19.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:20 compute-2 sudo[161337]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdojfacbueianxscesqbqnxjclvtlqpj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003419.8408303-4864-129878953747309/AnsiballZ_edpm_container_manage.py'
Oct 09 09:50:20 compute-2 sudo[161337]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:20 compute-2 python3[161339]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:50:20 compute-2 ceph-mon[5983]: pgmap v502: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:20.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:21.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:22 compute-2 ceph-mon[5983]: pgmap v503: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:22.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:23.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:24 compute-2 podman[161375]: 2025-10-09 09:50:24.220122186 +0000 UTC m=+0.057324983 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:50:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:24 compute-2 ceph-mon[5983]: pgmap v504: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000015s ======
Oct 09 09:50:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:24.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000015s
Oct 09 09:50:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:25.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:26 compute-2 ceph-mon[5983]: pgmap v505: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:26.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:27.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:28 compute-2 ceph-mon[5983]: pgmap v506: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:28.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:29.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:30 compute-2 sudo[161431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:30 compute-2 sudo[161431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:30 compute-2 sudo[161431]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:50:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:30.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:50:30 compute-2 ceph-mon[5983]: pgmap v507: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:31.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:50:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:32.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:50:32 compute-2 podman[161350]: 2025-10-09 09:50:32.881423845 +0000 UTC m=+12.600853015 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:32 compute-2 ceph-mon[5983]: pgmap v508: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:32 compute-2 podman[161475]: 2025-10-09 09:50:32.976560361 +0000 UTC m=+0.028437651 container create 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:50:32 compute-2 podman[161475]: 2025-10-09 09:50:32.9630418 +0000 UTC m=+0.014919110 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:32 compute-2 python3[161339]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct 09 09:50:33 compute-2 sudo[161337]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:33 compute-2 sudo[161662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdoqpharsnzgjluiibdrdtllycxwugws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003433.4250045-4888-179919174843266/AnsiballZ_stat.py'
Oct 09 09:50:33 compute-2 sudo[161662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:33 compute-2 podman[161627]: 2025-10-09 09:50:33.634526495 +0000 UTC m=+0.042787828 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:50:33 compute-2 python3.9[161671]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:33 compute-2 sudo[161662]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:33.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:34 compute-2 sudo[161825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aupjgfikiyemibnbijtthywqjfyfqqjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003434.3453963-4924-14291404299627/AnsiballZ_container_config_data.py'
Oct 09 09:50:34 compute-2 sudo[161825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:34 compute-2 python3.9[161827]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct 09 09:50:34 compute-2 sudo[161825]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:34.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:34 compute-2 ceph-mon[5983]: pgmap v509: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:35 compute-2 sudo[161977]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnuzhytspxzaujtpzmcdsfnlzkmzhshe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003434.9432642-4951-134789943451427/AnsiballZ_container_config_hash.py'
Oct 09 09:50:35 compute-2 sudo[161977]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:35 compute-2 python3.9[161979]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct 09 09:50:35 compute-2 sudo[161977]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:35 compute-2 sudo[162130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlfzmnpyjixrbtgldnitcjruenoqytnc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1760003435.660097-4980-209310967634085/AnsiballZ_edpm_container_manage.py'
Oct 09 09:50:35 compute-2 sudo[162130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:35.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:36 compute-2 python3[162132]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct 09 09:50:36 compute-2 podman[162160]: 2025-10-09 09:50:36.181682672 +0000 UTC m=+0.028527981 container create b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:50:36 compute-2 podman[162160]: 2025-10-09 09:50:36.168370851 +0000 UTC m=+0.015216160 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct 09 09:50:36 compute-2 python3[162132]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct 09 09:50:36 compute-2 sudo[162130]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:36 compute-2 sudo[162338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uspzjcgabrqqnfqxutaokjvuvvuaahna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003436.4673107-5004-224606091807384/AnsiballZ_stat.py'
Oct 09 09:50:36 compute-2 sudo[162338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:36 compute-2 python3.9[162340]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:36.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:36 compute-2 sudo[162338]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:36 compute-2 ceph-mon[5983]: pgmap v510: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:37 compute-2 sudo[162465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:50:37 compute-2 sudo[162465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:37 compute-2 sudo[162465]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-2 sudo[162518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skcmzsuejyhenvfkqooklbwpdaqmtvmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.111531-5031-102743356243307/AnsiballZ_file.py'
Oct 09 09:50:37 compute-2 sudo[162518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:37 compute-2 sudo[162517]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:50:37 compute-2 sudo[162517]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:37 compute-2 python3.9[162532]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:37 compute-2 sudo[162518]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-2 sudo[162517]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:37 compute-2 sudo[162723]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjfpydftzznwpdzpwebmvizheyzuspue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5086284-5031-30989634723154/AnsiballZ_copy.py'
Oct 09 09:50:37 compute-2 sudo[162723]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:50:37 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:50:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:50:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:37.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #28. Immutable memtables: 0.
Oct 09 09:50:37 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:37.998413) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 28
Oct 09 09:50:37 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003437998436, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4682, "num_deletes": 502, "total_data_size": 12784616, "memory_usage": 12956176, "flush_reason": "Manual Compaction"}
Oct 09 09:50:37 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #29: started
Oct 09 09:50:38 compute-2 python3.9[162725]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1760003437.5086284-5031-30989634723154/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438016523, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 29, "file_size": 8291926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13232, "largest_seqno": 17909, "table_properties": {"data_size": 8274219, "index_size": 11961, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36680, "raw_average_key_size": 19, "raw_value_size": 8237464, "raw_average_value_size": 4428, "num_data_blocks": 522, "num_entries": 1860, "num_filter_entries": 1860, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002995, "oldest_key_time": 1760002995, "file_creation_time": 1760003437, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 18141 microseconds, and 10112 cpu microseconds.
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.016553) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #29: 8291926 bytes OK
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.016566) [db/memtable_list.cc:519] [default] Level-0 commit table #29 started
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018253) [db/memtable_list.cc:722] [default] Level-0 commit table #29: memtable #1 done
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018265) EVENT_LOG_v1 {"time_micros": 1760003438018262, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.018276) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 12764192, prev total WAL file size 12764192, number of live WAL files 2.
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000025.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.019805) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [29(8097KB)], [27(11MB)]
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438019865, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [29], "files_L6": [27], "score": -1, "input_data_size": 19849869, "oldest_snapshot_seqno": -1}
Oct 09 09:50:38 compute-2 sudo[162723]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #30: 4994 keys, 15244081 bytes, temperature: kUnknown
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438092756, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 30, "file_size": 15244081, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15205925, "index_size": 24542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 124761, "raw_average_key_size": 24, "raw_value_size": 15110539, "raw_average_value_size": 3025, "num_data_blocks": 1034, "num_entries": 4994, "num_filter_entries": 4994, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003438, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 30, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.093908) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15244081 bytes
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.094988) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 272.2 rd, 209.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(7.9, 11.0 +0.0 blob) out(14.5 +0.0 blob), read-write-amplify(4.2) write-amplify(1.8) OK, records in: 6017, records dropped: 1023 output_compression: NoCompression
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.095007) EVENT_LOG_v1 {"time_micros": 1760003438094999, "job": 14, "event": "compaction_finished", "compaction_time_micros": 72926, "compaction_time_cpu_micros": 22502, "output_level": 6, "num_output_files": 1, "total_output_size": 15244081, "num_input_records": 6017, "num_output_records": 4994, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438096384, "job": 14, "event": "table_file_deletion", "file_number": 29}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000027.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003438097817, "job": 14, "event": "table_file_deletion", "file_number": 27}
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.019741) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097954) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:50:38.097962) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:50:38 compute-2 sudo[162801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nyvebbmzhdeoypjiqwmlljblkkhyghkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5086284-5031-30989634723154/AnsiballZ_systemd.py'
Oct 09 09:50:38 compute-2 sudo[162801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:38 compute-2 python3.9[162803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct 09 09:50:38 compute-2 systemd[1]: Reloading.
Oct 09 09:50:38 compute-2 systemd-rc-local-generator[162824]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:50:38 compute-2 systemd-sysv-generator[162831]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:50:38 compute-2 sudo[162801]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:38.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:38 compute-2 sudo[162912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmpmamuovpdqgcssogirukwouzrygssb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003437.5086284-5031-30989634723154/AnsiballZ_systemd.py'
Oct 09 09:50:38 compute-2 sudo[162912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:38 compute-2 ceph-mon[5983]: pgmap v511: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:38 compute-2 ceph-mon[5983]: pgmap v512: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:39 compute-2 python3.9[162914]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct 09 09:50:39 compute-2 systemd[1]: Reloading.
Oct 09 09:50:39 compute-2 systemd-rc-local-generator[162939]: /etc/rc.d/rc.local is not marked executable, skipping.
Oct 09 09:50:39 compute-2 systemd-sysv-generator[162942]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct 09 09:50:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:39 compute-2 systemd[1]: Starting nova_compute container...
Oct 09 09:50:39 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:39 compute-2 podman[162955]: 2025-10-09 09:50:39.605735169 +0000 UTC m=+0.072724334 container init b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, container_name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:50:39 compute-2 podman[162955]: 2025-10-09 09:50:39.61053194 +0000 UTC m=+0.077521086 container start b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute)
Oct 09 09:50:39 compute-2 podman[162955]: nova_compute
Oct 09 09:50:39 compute-2 nova_compute[162967]: + sudo -E kolla_set_configs
Oct 09 09:50:39 compute-2 systemd[1]: Started nova_compute container.
Oct 09 09:50:39 compute-2 sudo[162912]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Validating config file
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying service configuration files
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Deleting /etc/ceph
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Creating directory /etc/ceph
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Writing out command to execute
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:39 compute-2 nova_compute[162967]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:39 compute-2 nova_compute[162967]: ++ cat /run_command
Oct 09 09:50:39 compute-2 nova_compute[162967]: + CMD=nova-compute
Oct 09 09:50:39 compute-2 nova_compute[162967]: + ARGS=
Oct 09 09:50:39 compute-2 nova_compute[162967]: + sudo kolla_copy_cacerts
Oct 09 09:50:39 compute-2 nova_compute[162967]: + [[ ! -n '' ]]
Oct 09 09:50:39 compute-2 nova_compute[162967]: + . kolla_extend_start
Oct 09 09:50:39 compute-2 nova_compute[162967]: Running command: 'nova-compute'
Oct 09 09:50:39 compute-2 nova_compute[162967]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 09:50:39 compute-2 nova_compute[162967]: + umask 0022
Oct 09 09:50:39 compute-2 nova_compute[162967]: + exec nova-compute
Oct 09 09:50:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:39.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:40 compute-2 python3.9[163130]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:40.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:41 compute-2 ceph-mon[5983]: pgmap v513: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:41 compute-2 python3.9[163280]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.499 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.499 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.500 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.500 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.619 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:41 compute-2 nova_compute[162967]: 2025-10-09 09:50:41.629 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.010s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:41 compute-2 sudo[163309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:50:41 compute-2 sudo[163309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:41 compute-2 sudo[163309]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:41.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.064 2 INFO nova.virt.driver [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.145 2 INFO nova.compute.provider_config [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.152 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.153 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.154 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.155 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.156 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.157 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.158 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.159 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 python3.9[163460]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.160 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.161 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.162 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.163 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.164 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.165 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.166 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.167 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.168 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.169 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.170 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.171 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.172 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.173 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.174 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.175 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.176 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.177 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.178 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.179 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.180 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.181 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.182 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.183 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.184 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.185 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.186 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.187 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.188 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.189 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.190 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.191 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.192 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.193 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.194 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.195 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.196 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.197 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.198 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.199 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.200 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.201 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.202 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.203 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.204 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.205 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.206 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.207 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.208 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.209 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.210 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.211 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.212 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.213 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.214 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.215 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.216 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.217 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.218 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.219 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.220 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.221 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.222 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.223 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 WARNING oslo_config.cfg [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 09 09:50:42 compute-2 nova_compute[162967]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 09 09:50:42 compute-2 nova_compute[162967]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 09 09:50:42 compute-2 nova_compute[162967]: and ``live_migration_inbound_addr`` respectively.
Oct 09 09:50:42 compute-2 nova_compute[162967]: ).  Its value may be silently ignored in the future.
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.224 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.225 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.226 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.227 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.228 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.229 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.230 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.231 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.232 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.233 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.234 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.235 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.236 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.237 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.238 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.239 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.240 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.241 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.242 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.243 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.244 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.245 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.246 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.247 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.248 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.249 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.250 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.251 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.252 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.253 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.254 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.255 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.256 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.257 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.258 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.259 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.260 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.261 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.262 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.263 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.264 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.265 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.266 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.267 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.268 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.269 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.270 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.271 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.272 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.273 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.274 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.275 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.276 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.277 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.278 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.279 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.280 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.281 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.282 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.283 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.284 2 DEBUG oslo_service.service [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.284 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.324 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.325 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 09 09:50:42 compute-2 systemd[1]: Starting libvirt QEMU daemon...
Oct 09 09:50:42 compute-2 systemd[1]: Started libvirt QEMU daemon.
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.373 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe1f0c504c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.375 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe1f0c504c0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.376 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Connection event '1' reason 'None'
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.386 2 WARNING nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Cannot update service status on host "compute-2.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Oct 09 09:50:42 compute-2 nova_compute[162967]: 2025-10-09 09:50:42.386 2 DEBUG nova.virt.libvirt.volume.mount [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 09 09:50:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:42 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:50:42 compute-2 ceph-mon[5983]: pgmap v514: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:42 compute-2 sudo[163663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfljvoffurxfbudlkonhenszwcnvvnuv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003442.4829018-5212-55573838965776/AnsiballZ_podman_container.py'
Oct 09 09:50:42 compute-2 sudo[163663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:42.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:42 compute-2 python3.9[163665]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:50:42 compute-2 sudo[163663]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.061 2 INFO nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host capabilities <capabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]: 
Oct 09 09:50:43 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <host>
Oct 09 09:50:43 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <uuid>ed712924-75ec-452a-a842-ae61b9b9ed0c</uuid>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <arch>x86_64</arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <microcode version='167776725'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <signature family='25' model='1' stepping='1'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <maxphysaddr mode='emulate' bits='48'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='x2apic'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='tsc-deadline'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='osxsave'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='hypervisor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='tsc_adjust'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='ospke'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='vaes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='vpclmulqdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='spec-ctrl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='stibp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='arch-capabilities'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='cmp_legacy'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='virt-ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='lbrv'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='tsc-scale'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='vmcb-clean'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='pause-filter'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='pfthreshold'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='vgif'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='rdctl-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='mds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature name='pschange-mc-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <pages unit='KiB' size='4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <pages unit='KiB' size='2048'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <pages unit='KiB' size='1048576'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <power_management>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <suspend_mem/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </power_management>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <iommu support='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <migration_features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <live/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <uri_transports>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <uri_transport>tcp</uri_transport>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <uri_transport>rdma</uri_transport>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </uri_transports>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </migration_features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <topology>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <cells num='1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <cell id='0'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <memory unit='KiB'>7865152</memory>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <pages unit='KiB' size='4'>1966288</pages>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <pages unit='KiB' size='2048'>0</pages>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <distances>
Oct 09 09:50:43 compute-2 nova_compute[162967]:             <sibling id='0' value='10'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           </distances>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           <cpus num='4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:           </cpus>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         </cell>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </cells>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </topology>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <cache>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </cache>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <secmodel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model>selinux</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <doi>0</doi>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </secmodel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <secmodel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model>dac</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <doi>0</doi>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </secmodel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </host>
Oct 09 09:50:43 compute-2 nova_compute[162967]: 
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <guest>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <os_type>hvm</os_type>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <arch name='i686'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <wordsize>32</wordsize>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <domain type='qemu'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <domain type='kvm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <pae/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <nonpae/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <apic default='on' toggle='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <cpuselection/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <deviceboot/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <externalSnapshot/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </guest>
Oct 09 09:50:43 compute-2 nova_compute[162967]: 
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <guest>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <os_type>hvm</os_type>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <arch name='x86_64'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <wordsize>64</wordsize>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <domain type='qemu'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <domain type='kvm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <apic default='on' toggle='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <cpuselection/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <deviceboot/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <externalSnapshot/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </guest>
Oct 09 09:50:43 compute-2 nova_compute[162967]: 
Oct 09 09:50:43 compute-2 nova_compute[162967]: </capabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]: 
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.065 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.079 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 09 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <arch>i686</arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <vcpu max='240'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <os supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <loader supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>rom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pflash</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='readonly'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>yes</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='secure'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </loader>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </os>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>file</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>anonymous</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>memfd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </memoryBacking>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <disk supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>disk</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cdrom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>floppy</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>lun</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ide</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>fdc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>sata</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </disk>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vnc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>dbus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </graphics>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <video supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='modelType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vga</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cirrus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>none</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>bochs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ramfb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </video>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='mode'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>subsystem</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>mandatory</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>requisite</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>optional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pci</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hostdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <rng supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>random</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </rng>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='driverType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>path</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>handle</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </filesystem>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emulator</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>external</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>2.0</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </tpm>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </redirdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <channel supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pty</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>unix</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </channel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>qemu</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </crypto>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <interface supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>passt</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </interface>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <panic supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>isa</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>hyperv</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </panic>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <gic supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sev supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='features'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>relaxed</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vapic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vpindex</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>runtime</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>synic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>stimer</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reset</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>frequencies</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ipi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>avic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hyperv>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.082 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 09 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <arch>i686</arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <vcpu max='4096'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <os supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <loader supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>rom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pflash</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='readonly'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>yes</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='secure'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </loader>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </os>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>file</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>anonymous</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>memfd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </memoryBacking>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <disk supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>disk</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cdrom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>floppy</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>lun</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>fdc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>sata</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </disk>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vnc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>dbus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </graphics>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <video supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='modelType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vga</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cirrus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>none</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>bochs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ramfb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </video>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='mode'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>subsystem</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>mandatory</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>requisite</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>optional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pci</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hostdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <rng supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>random</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </rng>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='driverType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>path</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>handle</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </filesystem>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emulator</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>external</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>2.0</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </tpm>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </redirdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <channel supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pty</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>unix</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </channel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>qemu</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </crypto>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <interface supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>passt</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </interface>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <panic supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>isa</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>hyperv</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </panic>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <gic supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sev supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='features'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>relaxed</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vapic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vpindex</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>runtime</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>synic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>stimer</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reset</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>frequencies</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ipi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>avic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hyperv>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.113 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.115 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 09 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <arch>x86_64</arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <vcpu max='240'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <os supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='firmware'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <loader supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>rom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pflash</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='readonly'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>yes</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='secure'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </loader>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </os>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>file</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>anonymous</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>memfd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </memoryBacking>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <disk supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>disk</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cdrom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>floppy</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>lun</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ide</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>fdc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>sata</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </disk>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vnc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>dbus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </graphics>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <video supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='modelType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vga</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cirrus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>none</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>bochs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ramfb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </video>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='mode'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>subsystem</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>mandatory</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>requisite</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>optional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pci</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hostdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <rng supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>random</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </rng>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='driverType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>path</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>handle</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </filesystem>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emulator</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>external</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>2.0</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </tpm>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </redirdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <channel supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pty</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>unix</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </channel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>qemu</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </crypto>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <interface supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>passt</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </interface>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <panic supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>isa</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>hyperv</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </panic>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <gic supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sev supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='features'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>relaxed</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vapic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vpindex</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>runtime</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>synic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>stimer</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reset</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>frequencies</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ipi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>avic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hyperv>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.161 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 09 09:50:43 compute-2 nova_compute[162967]: <domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <domain>kvm</domain>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <arch>x86_64</arch>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <vcpu max='4096'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <iothreads supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <os supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='firmware'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>efi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <loader supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>rom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pflash</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='readonly'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>yes</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='secure'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>yes</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>no</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </loader>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </os>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='maximumMigratable'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>on</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>off</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <vendor>AMD</vendor>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='succor'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <mode name='custom' supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Denverton-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='auto-ibrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amd-psfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='stibp-always-on'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-128'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-256'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx10-512'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='prefetchiti'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Haswell-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512er'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512pf'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fma4'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tbm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xop'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='amx-tile'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-bf16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-fp16'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bitalg'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrc'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fzrm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='la57'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='taa-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='xfd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ifma'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cmpccxadd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fbsdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='fsrs'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ibrs-all'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mcdt-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='pbrsb-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='psdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='serialize'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='hle'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='rtm'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512bw'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512cd'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512dq'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512f'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='avx512vl'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='mpx'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='core-capability'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='split-lock-detect'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='cldemote'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='gfni'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdir64b'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='movdiri'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='athlon-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='core2duo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='coreduo-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='n270-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='ss'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <blockers model='phenom-v1'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnow'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <feature name='3dnowext'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </blockers>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </mode>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </cpu>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <memoryBacking supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <enum name='sourceType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>file</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>anonymous</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <value>memfd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </memoryBacking>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <disk supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='diskDevice'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>disk</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cdrom</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>floppy</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>lun</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>fdc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>sata</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </disk>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <graphics supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vnc</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egl-headless</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>dbus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </graphics>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <video supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='modelType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vga</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>cirrus</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>none</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>bochs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ramfb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </video>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hostdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='mode'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>subsystem</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='startupPolicy'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>mandatory</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>requisite</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>optional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='subsysType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pci</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>scsi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='capsType'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='pciBackend'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hostdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <rng supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtio-non-transitional</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>random</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>egd</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </rng>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <filesystem supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='driverType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>path</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>handle</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>virtiofs</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </filesystem>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <tpm supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-tis</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tpm-crb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emulator</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>external</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendVersion'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>2.0</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </tpm>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <redirdev supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='bus'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>usb</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </redirdev>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <channel supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>pty</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>unix</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </channel>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <crypto supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='type'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>qemu</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendModel'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>builtin</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </crypto>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <interface supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='backendType'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>default</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>passt</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </interface>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <panic supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='model'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>isa</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>hyperv</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </panic>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </devices>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   <features>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <gic supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <genid supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <backup supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <async-teardown supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <ps2 supported='yes'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sev supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <sgx supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <hyperv supported='yes'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       <enum name='features'>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>relaxed</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vapic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>spinlocks</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vpindex</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>runtime</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>synic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>stimer</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reset</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>vendor_id</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>frequencies</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>reenlightenment</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>tlbflush</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>ipi</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>avic</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>emsr_bitmap</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:         <value>xmm_input</value>
Oct 09 09:50:43 compute-2 nova_compute[162967]:       </enum>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     </hyperv>
Oct 09 09:50:43 compute-2 nova_compute[162967]:     <launchSecurity supported='no'/>
Oct 09 09:50:43 compute-2 nova_compute[162967]:   </features>
Oct 09 09:50:43 compute-2 nova_compute[162967]: </domainCapabilities>
Oct 09 09:50:43 compute-2 nova_compute[162967]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.196 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 DEBUG nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.197 2 INFO nova.virt.libvirt.host [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Secure Boot support detected
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.198 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.198 2 INFO nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.206 2 DEBUG nova.virt.libvirt.driver [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.222 2 INFO nova.virt.node [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.238 2 WARNING nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Compute nodes ['41a86af9-054a-49c9-9d2e-f0396c1c31a8'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.290 2 INFO nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 WARNING nova.compute.manager [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG oslo_concurrency.lockutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.308 2 DEBUG nova.compute.resource_tracker [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.309 2 DEBUG oslo_concurrency.processutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:43 compute-2 sudo[163848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qywttpflnssomcyhvnhblibqwkpygjwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003443.1694643-5236-215382614638482/AnsiballZ_systemd.py'
Oct 09 09:50:43 compute-2 sudo[163848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:43 compute-2 python3.9[163850]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct 09 09:50:43 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:43 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2933965568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.652 2 DEBUG oslo_concurrency.processutils [None req-c8a0d0c8-1c2c-4478-8404-9ba8a3a1f814 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:43 compute-2 systemd[1]: Stopping nova_compute container...
Oct 09 09:50:43 compute-2 systemd[1]: Starting libvirt nodedev daemon...
Oct 09 09:50:43 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/500541388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2933965568' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:43 compute-2 systemd[1]: Started libvirt nodedev daemon.
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:43 compute-2 nova_compute[162967]: 2025-10-09 09:50:43.713 2 DEBUG oslo_concurrency.lockutils [None req-04190f80-2189-4e41-897a-8f7bff6fba95 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:43 compute-2 podman[163877]: 2025-10-09 09:50:43.721267578 +0000 UTC m=+0.037459335 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:50:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:43.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:44 compute-2 virtqemud[163507]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct 09 09:50:44 compute-2 virtqemud[163507]: hostname: compute-2
Oct 09 09:50:44 compute-2 virtqemud[163507]: End of file while reading data: Input/output error
Oct 09 09:50:44 compute-2 systemd[1]: libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope: Deactivated successfully.
Oct 09 09:50:44 compute-2 systemd[1]: libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope: Consumed 2.822s CPU time.
Oct 09 09:50:44 compute-2 conmon[162967]: conmon b4db2b5c58032a2a0063 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0.scope/container/memory.events
Oct 09 09:50:44 compute-2 podman[163876]: 2025-10-09 09:50:44.090420097 +0000 UTC m=+0.407844087 container died b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 09 09:50:44 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0-userdata-shm.mount: Deactivated successfully.
Oct 09 09:50:44 compute-2 systemd[1]: var-lib-containers-storage-overlay-b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b-merged.mount: Deactivated successfully.
Oct 09 09:50:44 compute-2 podman[163876]: 2025-10-09 09:50:44.155118433 +0000 UTC m=+0.472542423 container cleanup b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm)
Oct 09 09:50:44 compute-2 podman[163876]: nova_compute
Oct 09 09:50:44 compute-2 podman[163940]: nova_compute
Oct 09 09:50:44 compute-2 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct 09 09:50:44 compute-2 systemd[1]: Stopped nova_compute container.
Oct 09 09:50:44 compute-2 systemd[1]: Starting nova_compute container...
Oct 09 09:50:44 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b34e9b204fbf9e9ceaeee69e9bf99fac5311a7fff31ae75237f7439f53532b3b/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:44 compute-2 podman[163949]: 2025-10-09 09:50:44.283705909 +0000 UTC m=+0.065004755 container init b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 09 09:50:44 compute-2 podman[163949]: 2025-10-09 09:50:44.288404915 +0000 UTC m=+0.069703751 container start b4db2b5c58032a2a006362487c714b358dbd4643274bbd91605d03a9a45bebb0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 09 09:50:44 compute-2 podman[163949]: nova_compute
Oct 09 09:50:44 compute-2 nova_compute[163961]: + sudo -E kolla_set_configs
Oct 09 09:50:44 compute-2 systemd[1]: Started nova_compute container.
Oct 09 09:50:44 compute-2 sudo[163848]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Validating config file
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying service configuration files
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /etc/ceph
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Creating directory /etc/ceph
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Writing out command to execute
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:44 compute-2 nova_compute[163961]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct 09 09:50:44 compute-2 nova_compute[163961]: ++ cat /run_command
Oct 09 09:50:44 compute-2 nova_compute[163961]: + CMD=nova-compute
Oct 09 09:50:44 compute-2 nova_compute[163961]: + ARGS=
Oct 09 09:50:44 compute-2 nova_compute[163961]: + sudo kolla_copy_cacerts
Oct 09 09:50:44 compute-2 nova_compute[163961]: + [[ ! -n '' ]]
Oct 09 09:50:44 compute-2 nova_compute[163961]: + . kolla_extend_start
Oct 09 09:50:44 compute-2 nova_compute[163961]: Running command: 'nova-compute'
Oct 09 09:50:44 compute-2 nova_compute[163961]: + echo 'Running command: '\''nova-compute'\'''
Oct 09 09:50:44 compute-2 nova_compute[163961]: + umask 0022
Oct 09 09:50:44 compute-2 nova_compute[163961]: + exec nova-compute
Oct 09 09:50:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:44 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/331903618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:44 compute-2 ceph-mon[5983]: pgmap v515: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:44 compute-2 sudo[164122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnhlhrtzcsqzouextsfbuaqhcavsipp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1760003444.5234268-5263-122067618967583/AnsiballZ_podman_container.py'
Oct 09 09:50:44 compute-2 sudo[164122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 09:50:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:44.859 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:44 compute-2 python3.9[164124]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None 
preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct 09 09:50:45 compute-2 systemd[1]: Started libpod-conmon-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope.
Oct 09 09:50:45 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct 09 09:50:45 compute-2 podman[164144]: 2025-10-09 09:50:45.054169982 +0000 UTC m=+0.076052176 container init 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:50:45 compute-2 podman[164144]: 2025-10-09 09:50:45.060247196 +0000 UTC m=+0.082129370 container start 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Oct 09 09:50:45 compute-2 python3.9[164124]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Applying nova statedir ownership
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct 09 09:50:45 compute-2 nova_compute_init[164164]: INFO:nova_statedir:Nova statedir ownership complete
Oct 09 09:50:45 compute-2 systemd[1]: libpod-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope: Deactivated successfully.
Oct 09 09:50:45 compute-2 podman[164182]: 2025-10-09 09:50:45.142979894 +0000 UTC m=+0.020396335 container died 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct 09 09:50:45 compute-2 sudo[164122]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:45 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc-userdata-shm.mount: Deactivated successfully.
Oct 09 09:50:45 compute-2 systemd[1]: var-lib-containers-storage-overlay-196aeee98314dfe7b46ce60814ecf6481bb1b17ebc8fc5068367dc1c5add4b10-merged.mount: Deactivated successfully.
Oct 09 09:50:45 compute-2 podman[164182]: 2025-10-09 09:50:45.17316611 +0000 UTC m=+0.050582530 container cleanup 47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 09 09:50:45 compute-2 systemd[1]: libpod-conmon-47c688b3f26d83022e66fed8487a9e715e5bfaa555904a8f3a63e8ffde4412fc.scope: Deactivated successfully.
Oct 09 09:50:45 compute-2 podman[164176]: 2025-10-09 09:50:45.209729324 +0000 UTC m=+0.081758031 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:50:45 compute-2 sshd-session[128625]: Connection closed by 192.168.122.30 port 60788
Oct 09 09:50:45 compute-2 sshd-session[128622]: pam_unix(sshd:session): session closed for user zuul
Oct 09 09:50:45 compute-2 systemd[1]: session-37.scope: Deactivated successfully.
Oct 09 09:50:45 compute-2 systemd[1]: session-37.scope: Consumed 1min 57.435s CPU time.
Oct 09 09:50:45 compute-2 systemd-logind[800]: Session 37 logged out. Waiting for processes to exit.
Oct 09 09:50:45 compute-2 systemd-logind[800]: Removed session 37.
Oct 09 09:50:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:45.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.027 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.028 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.153 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.163 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.009s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.563 2 INFO nova.virt.driver [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.648 2 INFO nova.compute.provider_config [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_concurrency.lockutils [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.654 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.655 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.656 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console_host                   = compute-2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.657 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.658 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.659 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] host                           = compute-2.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.660 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.661 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.662 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.663 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.664 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.665 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] my_block_storage_ip            = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] my_ip                          = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.666 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.667 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.668 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.669 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.670 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.671 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.672 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.673 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.674 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.675 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.676 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.677 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.678 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.679 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.680 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.681 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.682 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.683 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.684 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.685 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.686 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.687 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.688 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.689 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.690 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.691 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.692 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.693 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.694 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.695 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.696 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.697 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.698 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.699 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.700 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.701 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.702 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.703 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.704 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.705 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.706 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.707 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.708 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.709 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.710 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.711 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.712 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.713 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.714 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.715 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.716 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.717 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.718 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.719 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.720 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.721 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.722 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.723 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 WARNING oslo_config.cfg [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct 09 09:50:46 compute-2 nova_compute[163961]: live_migration_uri is deprecated for removal in favor of two other options that
Oct 09 09:50:46 compute-2 nova_compute[163961]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct 09 09:50:46 compute-2 nova_compute[163961]: and ``live_migration_inbound_addr`` respectively.
Oct 09 09:50:46 compute-2 nova_compute[163961]: ).  Its value may be silently ignored in the future.
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.724 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.725 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.726 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_secret_uuid        = 286f8bf0-da72-5823-9a4e-ac4457d9e609 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.727 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.728 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.729 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.730 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.731 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.732 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.733 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.734 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.735 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.736 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.737 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.738 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.739 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.740 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.741 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.742 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.743 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.744 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.745 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.746 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.747 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.748 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.749 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.750 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.751 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.752 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.753 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.754 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.755 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.756 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.757 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.758 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.759 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.102 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.760 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.761 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.762 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.763 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.764 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.765 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.766 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.767 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.768 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.769 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.770 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.771 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.772 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.773 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.774 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.775 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.776 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.777 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.778 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.779 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.780 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.781 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.782 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.783 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.thread_pool_size = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.784 2 DEBUG oslo_service.service [None req-3ebf9c35-d525-4ba6-9d42-603599ba4ad0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.785 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.795 2 INFO nova.virt.node [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.795 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.796 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.804 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f32e4a6a5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.806 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f32e4a6a5e0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.806 2 INFO nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Connection event '1' reason 'None'
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.810 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host capabilities <capabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]: 
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <host>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <uuid>ed712924-75ec-452a-a842-ae61b9b9ed0c</uuid>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <arch>x86_64</arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <microcode version='167776725'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <signature family='25' model='1' stepping='1'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <topology sockets='4' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <maxphysaddr mode='emulate' bits='48'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='x2apic'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='tsc-deadline'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='osxsave'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='hypervisor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='tsc_adjust'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='ospke'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='vaes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='vpclmulqdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='spec-ctrl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='stibp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='arch-capabilities'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='cmp_legacy'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='virt-ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='lbrv'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='tsc-scale'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='vmcb-clean'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='pause-filter'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='pfthreshold'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='vgif'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='rdctl-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='mds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature name='pschange-mc-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <pages unit='KiB' size='4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <pages unit='KiB' size='2048'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <pages unit='KiB' size='1048576'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <power_management>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <suspend_mem/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </power_management>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <iommu support='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <migration_features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <live/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <uri_transports>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <uri_transport>tcp</uri_transport>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <uri_transport>rdma</uri_transport>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </uri_transports>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </migration_features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <topology>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <cells num='1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <cell id='0'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <memory unit='KiB'>7865152</memory>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <pages unit='KiB' size='4'>1966288</pages>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <pages unit='KiB' size='2048'>0</pages>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <pages unit='KiB' size='1048576'>0</pages>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <distances>
Oct 09 09:50:46 compute-2 nova_compute[163961]:             <sibling id='0' value='10'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           </distances>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           <cpus num='4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:           </cpus>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         </cell>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </cells>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </topology>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <cache>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </cache>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <secmodel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model>selinux</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <doi>0</doi>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </secmodel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <secmodel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model>dac</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <doi>0</doi>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <baselabel type='kvm'>+107:+107</baselabel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <baselabel type='qemu'>+107:+107</baselabel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </secmodel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </host>
Oct 09 09:50:46 compute-2 nova_compute[163961]: 
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <guest>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <os_type>hvm</os_type>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <arch name='i686'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <wordsize>32</wordsize>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <domain type='qemu'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <domain type='kvm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <pae/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <nonpae/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <apic default='on' toggle='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <cpuselection/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <deviceboot/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <externalSnapshot/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </guest>
Oct 09 09:50:46 compute-2 nova_compute[163961]: 
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <guest>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <os_type>hvm</os_type>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <arch name='x86_64'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <wordsize>64</wordsize>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <domain type='qemu'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <domain type='kvm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <acpi default='on' toggle='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <apic default='on' toggle='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <cpuselection/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <deviceboot/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <disksnapshot default='on' toggle='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <externalSnapshot/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </guest>
Oct 09 09:50:46 compute-2 nova_compute[163961]: 
Oct 09 09:50:46 compute-2 nova_compute[163961]: </capabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]: 
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.814 2 DEBUG nova.virt.libvirt.volume.mount [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.815 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.820 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct 09 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <arch>i686</arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <vcpu max='4096'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <os supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <loader supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>rom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pflash</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='readonly'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>yes</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='secure'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </loader>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </os>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>file</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>anonymous</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>memfd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </memoryBacking>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <disk supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>disk</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cdrom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>floppy</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>lun</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>fdc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>sata</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vnc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>dbus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </graphics>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <video supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='modelType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vga</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cirrus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>none</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>bochs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ramfb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </video>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='mode'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>subsystem</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>mandatory</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>requisite</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>optional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pci</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hostdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <rng supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>random</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </rng>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='driverType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>path</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>handle</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </filesystem>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emulator</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>external</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>2.0</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </tpm>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </redirdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <channel supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pty</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>unix</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </channel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>qemu</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </crypto>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <interface supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>passt</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </interface>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <panic supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>isa</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>hyperv</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </panic>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <gic supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sev supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='features'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>relaxed</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vapic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vpindex</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>runtime</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>synic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>stimer</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reset</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>frequencies</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ipi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>avic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hyperv>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.824 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct 09 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <arch>i686</arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <vcpu max='240'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <os supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <loader supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>rom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pflash</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='readonly'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>yes</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='secure'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </loader>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </os>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 ceph-mon[5983]: pgmap v516: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:46.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>file</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>anonymous</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>memfd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </memoryBacking>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <disk supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>disk</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cdrom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>floppy</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>lun</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ide</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>fdc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>sata</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vnc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>dbus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </graphics>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <video supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='modelType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vga</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cirrus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>none</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>bochs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ramfb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </video>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='mode'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>subsystem</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>mandatory</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>requisite</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>optional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pci</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hostdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <rng supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>random</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </rng>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='driverType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>path</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>handle</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </filesystem>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emulator</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>external</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>2.0</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </tpm>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </redirdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <channel supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pty</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>unix</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </channel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>qemu</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </crypto>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <interface supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>passt</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </interface>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <panic supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>isa</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>hyperv</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </panic>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <gic supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sev supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='features'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>relaxed</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vapic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vpindex</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>runtime</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>synic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>stimer</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reset</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>frequencies</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ipi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>avic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hyperv>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.826 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.828 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct 09 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <machine>pc-q35-rhel9.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <arch>x86_64</arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <vcpu max='4096'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <os supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='firmware'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>efi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <loader supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>rom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pflash</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='readonly'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>yes</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='secure'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>yes</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </loader>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </os>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>file</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>anonymous</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>memfd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </memoryBacking>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <disk supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>disk</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cdrom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>floppy</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>lun</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>fdc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>sata</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vnc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>dbus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </graphics>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <video supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='modelType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vga</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cirrus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>none</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>bochs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ramfb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </video>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='mode'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>subsystem</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>mandatory</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>requisite</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>optional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pci</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hostdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <rng supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>random</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </rng>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='driverType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>path</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>handle</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </filesystem>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emulator</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>external</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>2.0</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </tpm>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </redirdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <channel supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pty</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>unix</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </channel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>qemu</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </crypto>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <interface supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>passt</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </interface>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <panic supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>isa</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>hyperv</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </panic>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <gic supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sev supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='features'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>relaxed</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vapic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vpindex</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>runtime</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>synic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>stimer</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reset</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>frequencies</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ipi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>avic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hyperv>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.880 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct 09 09:50:46 compute-2 nova_compute[163961]: <domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <path>/usr/libexec/qemu-kvm</path>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <domain>kvm</domain>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <machine>pc-i440fx-rhel7.6.0</machine>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <arch>x86_64</arch>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <vcpu max='240'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <iothreads supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <os supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='firmware'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <loader supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>rom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pflash</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='readonly'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>yes</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='secure'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>no</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </loader>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </os>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-passthrough' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='hostPassthroughMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='maximum' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='maximumMigratable'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>on</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>off</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='host-model' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model fallback='forbid'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <vendor>AMD</vendor>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <maxphysaddr mode='passthrough' limit='48'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='x2apic'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-deadline'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='hypervisor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc_adjust'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vaes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vpclmulqdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='spec-ctrl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='stibp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='arch-capabilities'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='cmp_legacy'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='overflow-recov'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='succor'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='virt-ssbd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lbrv'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='tsc-scale'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vmcb-clean'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='flushbyasid'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pause-filter'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pfthreshold'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='v-vmsave-vmload'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='vgif'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='lfence-always-serializing'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rdctl-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='mds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='pschange-mc-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='gds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <feature policy='require' name='rfds-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <mode name='custom' supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Broadwell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Broadwell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cascadelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Cooperlake-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Denverton-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Denverton-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Hygon'>Dhyana-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Genoa-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='auto-ibrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Milan-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='EPYC-Milan-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amd-psfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='no-nested-data-bp'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='null-sel-clr-base'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='stibp-always-on'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='AMD'>EPYC-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='GraniteRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-128'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-256'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx10-512'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='prefetchiti'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Haswell-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Haswell-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-noTSX'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v6'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Icelake-Server-v7'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>IvyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='KnightsMill-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4fmaps'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-4vnniw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512er'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512pf'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G4-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Opteron_G5-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fma4'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tbm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xop'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SapphireRapids-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='amx-tile'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-bf16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-fp16'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512-vpopcntdq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bitalg'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vbmi2'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrc'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fzrm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='la57'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='taa-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='tsx-ldtrk'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='xfd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='SierraForest-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ifma'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-ne-convert'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx-vnni-int8'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='bus-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cmpccxadd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fbsdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='fsrs'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ibrs-all'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mcdt-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='pbrsb-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='psdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='sbdr-ssdp-no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='serialize'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Client-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Skylake-Client-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='hle'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='rtm'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Skylake-Server-v5'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512bw'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512cd'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512dq'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512f'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='avx512vl'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='mpx'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v2'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v3'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='core-capability'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='split-lock-detect'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='Snowridge-v4'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='cldemote'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='gfni'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdir64b'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='movdiri'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='athlon-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='core2duo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='coreduo-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='n270-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='ss'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <blockers model='phenom-v1'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnow'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <feature name='3dnowext'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </blockers>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </mode>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </cpu>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <memoryBacking supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <enum name='sourceType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>file</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>anonymous</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <value>memfd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </memoryBacking>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <disk supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='diskDevice'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>disk</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cdrom</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>floppy</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>lun</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ide</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>fdc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>sata</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <graphics supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vnc</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egl-headless</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>dbus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </graphics>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <video supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='modelType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vga</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>cirrus</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>none</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>bochs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ramfb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </video>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hostdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='mode'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>subsystem</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='startupPolicy'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>mandatory</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>requisite</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>optional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='subsysType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pci</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>scsi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='capsType'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='pciBackend'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hostdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <rng supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtio-non-transitional</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>random</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>egd</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </rng>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <filesystem supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='driverType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>path</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>handle</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>virtiofs</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </filesystem>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <tpm supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-tis</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tpm-crb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emulator</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>external</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendVersion'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>2.0</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </tpm>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <redirdev supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='bus'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>usb</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </redirdev>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <channel supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>pty</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>unix</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </channel>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <crypto supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='type'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>qemu</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendModel'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>builtin</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </crypto>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <interface supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='backendType'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>default</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>passt</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </interface>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <panic supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='model'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>isa</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>hyperv</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </panic>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </devices>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   <features>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <gic supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <vmcoreinfo supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <genid supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backingStoreInput supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <backup supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <async-teardown supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <ps2 supported='yes'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sev supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <sgx supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <hyperv supported='yes'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       <enum name='features'>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>relaxed</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vapic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>spinlocks</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vpindex</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>runtime</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>synic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>stimer</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reset</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>vendor_id</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>frequencies</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>reenlightenment</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>tlbflush</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>ipi</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>avic</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>emsr_bitmap</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:         <value>xmm_input</value>
Oct 09 09:50:46 compute-2 nova_compute[163961]:       </enum>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     </hyperv>
Oct 09 09:50:46 compute-2 nova_compute[163961]:     <launchSecurity supported='no'/>
Oct 09 09:50:46 compute-2 nova_compute[163961]:   </features>
Oct 09 09:50:46 compute-2 nova_compute[163961]: </domainCapabilities>
Oct 09 09:50:46 compute-2 nova_compute[163961]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.920 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.921 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Secure Boot support detected
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.922 2 INFO nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.927 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.957 2 INFO nova.virt.node [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Determined node identity 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from /var/lib/nova/compute_id
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.968 2 WARNING nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute nodes ['41a86af9-054a-49c9-9d2e-f0396c1c31a8'] for host compute-2.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.986 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.995 2 WARNING nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] No compute node record found for host compute-2.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-2.ctlplane.example.com could not be found.
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.995 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:50:46 compute-2 nova_compute[163961]: 2025-10-09 09:50:46.996 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/173428441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.345 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.542 2 WARNING nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5270MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.543 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.560 2 WARNING nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] No compute node record for compute-2.ctlplane.example.com:41a86af9-054a-49c9-9d2e-f0396c1c31a8: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 41a86af9-054a-49c9-9d2e-f0396c1c31a8 could not be found.
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.602 2 INFO nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute node record created for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com with uuid: 41a86af9-054a-49c9-9d2e-f0396c1c31a8
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.673 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:50:47 compute-2 nova_compute[163961]: 2025-10-09 09:50:47.674 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:50:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1952717457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/173428441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2700888579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:47.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.135 2 INFO nova.scheduler.client.report [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [req-152ce9ff-fe8e-46b9-be29-36d3362c5e96] Created resource provider record via placement API for resource provider with UUID 41a86af9-054a-49c9-9d2e-f0396c1c31a8 and name compute-2.ctlplane.example.com.
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.187 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:50:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:50:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2871433844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.523 2 DEBUG oslo_concurrency.processutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.527 2 DEBUG nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct 09 09:50:48 compute-2 nova_compute[163961]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.527 2 INFO nova.virt.libvirt.host [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] kernel doesn't support AMD SEV
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.528 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.528 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.598 2 DEBUG nova.scheduler.client.report [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updated inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.598 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.599 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.676 2 DEBUG nova.compute.provider_tree [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG nova.compute.resource_tracker [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG oslo_concurrency.lockutils [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.692 2 DEBUG nova.service [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.736 2 DEBUG nova.service [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Oct 09 09:50:48 compute-2 nova_compute[163961]: 2025-10-09 09:50:48.736 2 DEBUG nova.servicegroup.drivers.db [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] DB_Driver: join new ServiceGroup member compute-2.ctlplane.example.com to the compute group, service = <Service: host=compute-2.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Oct 09 09:50:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:48.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:48 compute-2 ceph-mon[5983]: pgmap v517: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2233634120' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2871433844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4019565509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:50:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:50:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:49.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:50 compute-2 sudo[164312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:50:50 compute-2 sudo[164312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:50:50 compute-2 sudo[164312]: pam_unix(sudo:session): session closed for user root
Oct 09 09:50:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:50.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:50 compute-2 ceph-mon[5983]: pgmap v518: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:51.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:52.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:52 compute-2 ceph-mon[5983]: pgmap v519: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:50:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:50:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:53.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:50:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:54.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:54 compute-2 ceph-mon[5983]: pgmap v520: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:50:55 compute-2 podman[164341]: 2025-10-09 09:50:55.225726305 +0000 UTC m=+0.060248469 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct 09 09:50:55 compute-2 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 09:50:55 compute-2 systemd[1270]: Activating special unit Exit the Session...
Oct 09 09:50:55 compute-2 systemd[1270]: Removed slice User Background Tasks Slice.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped target Main User Target.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped target Basic System.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped target Paths.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped target Sockets.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped target Timers.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 09:50:55 compute-2 systemd[1270]: Closed D-Bus User Message Bus Socket.
Oct 09 09:50:55 compute-2 systemd[1270]: Stopped Create User's Volatile Files and Directories.
Oct 09 09:50:55 compute-2 systemd[1270]: Removed slice User Application Slice.
Oct 09 09:50:55 compute-2 systemd[1270]: Reached target Shutdown.
Oct 09 09:50:55 compute-2 systemd[1270]: Finished Exit the Session.
Oct 09 09:50:55 compute-2 systemd[1270]: Reached target Exit the Session.
Oct 09 09:50:55 compute-2 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 09:50:55 compute-2 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 09:50:55 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 09:50:55 compute-2 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 09:50:55 compute-2 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 09:50:55 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 09:50:55 compute-2 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 09:50:55 compute-2 systemd[1]: user-1000.slice: Consumed 8min 13.727s CPU time.
Oct 09 09:50:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:55.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:50:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:56.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:56 compute-2 ceph-mon[5983]: pgmap v521: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:50:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:57.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:50:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:50:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:50:58.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:50:58 compute-2 ceph-mon[5983]: pgmap v522: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:50:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:50:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:50:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:50:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:50:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:50:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:50:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:50:59.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:00.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:00 compute-2 ceph-mon[5983]: pgmap v523: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:01.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:02.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:02 compute-2 ceph-mon[5983]: pgmap v524: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:04.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:04 compute-2 podman[164376]: 2025-10-09 09:51:04.20032549 +0000 UTC m=+0.035375406 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct 09 09:51:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:04.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:04 compute-2 ceph-mon[5983]: pgmap v525: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:06.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:06.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:06 compute-2 ceph-mon[5983]: pgmap v526: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:08.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:08.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:08 compute-2 ceph-mon[5983]: pgmap v527: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:51:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:51:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2195742608' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2714317801' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:51:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1379833168' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:51:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:10.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.269 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:51:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:10 compute-2 sudo[164399]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:10 compute-2 sudo[164399]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:10 compute-2 sudo[164399]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:51:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:10.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:51:10 compute-2 ceph-mon[5983]: pgmap v528: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:12.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:12.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:12 compute-2 ceph-mon[5983]: pgmap v529: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:14 compute-2 podman[164428]: 2025-10-09 09:51:14.210757825 +0000 UTC m=+0.037590483 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 09 09:51:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:14.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:14 compute-2 ceph-mon[5983]: pgmap v530: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:16.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:16 compute-2 podman[164447]: 2025-10-09 09:51:16.206262111 +0000 UTC m=+0.037808544 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:16.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:16 compute-2 ceph-mon[5983]: pgmap v531: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:18.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:18.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:19 compute-2 ceph-mon[5983]: pgmap v532: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:20.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:20.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:21 compute-2 ceph-mon[5983]: pgmap v533: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:22.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:23 compute-2 ceph-mon[5983]: pgmap v534: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:24.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:51:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:24.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:51:25 compute-2 ceph-mon[5983]: pgmap v535: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:26 compute-2 podman[164473]: 2025-10-09 09:51:26.234445603 +0000 UTC m=+0.064391037 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:26.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:27 compute-2 ceph-mon[5983]: pgmap v536: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:28.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:29 compute-2 ceph-mon[5983]: pgmap v537: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:30.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:30 compute-2 sudo[164501]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:30 compute-2 sudo[164501]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:30 compute-2 sudo[164501]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:30.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:31 compute-2 ceph-mon[5983]: pgmap v538: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #31. Immutable memtables: 0.
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.232303) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 31
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491232329, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 761, "num_deletes": 250, "total_data_size": 1519760, "memory_usage": 1545464, "flush_reason": "Manual Compaction"}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #32: started
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491235044, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 32, "file_size": 675072, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17914, "largest_seqno": 18670, "table_properties": {"data_size": 671902, "index_size": 1014, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8252, "raw_average_key_size": 20, "raw_value_size": 665286, "raw_average_value_size": 1614, "num_data_blocks": 44, "num_entries": 412, "num_filter_entries": 412, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003438, "oldest_key_time": 1760003438, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 2768 microseconds, and 1992 cpu microseconds.
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235070) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #32: 675072 bytes OK
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235082) [db/memtable_list.cc:519] [default] Level-0 commit table #32 started
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235386) [db/memtable_list.cc:722] [default] Level-0 commit table #32: memtable #1 done
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235396) EVENT_LOG_v1 {"time_micros": 1760003491235393, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235405) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 1515731, prev total WAL file size 1515731, number of live WAL files 2.
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000028.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.236066) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [32(659KB)], [30(14MB)]
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491236115, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [32], "files_L6": [30], "score": -1, "input_data_size": 15919153, "oldest_snapshot_seqno": -1}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #33: 4914 keys, 12158789 bytes, temperature: kUnknown
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491272192, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 33, "file_size": 12158789, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12125147, "index_size": 20284, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12293, "raw_key_size": 123516, "raw_average_key_size": 25, "raw_value_size": 12035058, "raw_average_value_size": 2449, "num_data_blocks": 847, "num_entries": 4914, "num_filter_entries": 4914, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003491, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 33, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.272301) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12158789 bytes
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.272608) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 440.9 rd, 336.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 14.5 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(41.6) write-amplify(18.0) OK, records in: 5406, records dropped: 492 output_compression: NoCompression
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.272624) EVENT_LOG_v1 {"time_micros": 1760003491272615, "job": 16, "event": "compaction_finished", "compaction_time_micros": 36102, "compaction_time_cpu_micros": 19288, "output_level": 6, "num_output_files": 1, "total_output_size": 12158789, "num_input_records": 5406, "num_output_records": 4914, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491272757, "job": 16, "event": "table_file_deletion", "file_number": 32}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000030.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003491274649, "job": 16, "event": "table_file_deletion", "file_number": 30}
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.235987) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.274668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.274670) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.274672) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.274673) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:51:31.274674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:51:31 compute-2 rsyslogd[1245]: imjournal from <compute-2:ceph-mon>: begin to drop messages due to rate-limiting
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:51:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:32.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:51:32 compute-2 ceph-mon[5983]: pgmap v539: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:32.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:51:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:34.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:51:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:34 compute-2 ceph-mon[5983]: pgmap v540: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:34.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:35 compute-2 podman[164530]: 2025-10-09 09:51:35.204506074 +0000 UTC m=+0.038866451 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:51:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:36.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:36 compute-2 ceph-mon[5983]: pgmap v541: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:36.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:38.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:38 compute-2 ceph-mon[5983]: pgmap v542: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:38.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:40.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:40 compute-2 ceph-mon[5983]: pgmap v543: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:51:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:40.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:41 compute-2 sudo[164554]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:51:41 compute-2 sudo[164554]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:41 compute-2 sudo[164554]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:41 compute-2 sudo[164579]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:51:41 compute-2 sudo[164579]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:42.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:42 compute-2 sudo[164579]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:42 compute-2 nova_compute[163961]: 2025-10-09 09:51:42.738 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:42 compute-2 nova_compute[163961]: 2025-10-09 09:51:42.762 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:42 compute-2 ceph-mon[5983]: pgmap v544: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:42.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:44.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:44 compute-2 ceph-mon[5983]: pgmap v545: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:51:44 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:51:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:45 compute-2 podman[164637]: 2025-10-09 09:51:45.20536985 +0000 UTC m=+0.038405601 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:51:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:45 compute-2 ceph-mon[5983]: pgmap v546: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 09:51:45 compute-2 ceph-mon[5983]: pgmap v547: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:46.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.233 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.233 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.233 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.233 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.233 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.234 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.234 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.234 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.234 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.251 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.251 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.251 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.252 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.252 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:51:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:51:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2099201754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.593 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.809 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.810 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5342MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": 
"0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.810 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:51:46 compute-2 nova_compute[163961]: 2025-10-09 09:51:46.810 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3617190942' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2099201754' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3423074867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:46.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.061 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.061 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.095 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:51:47 compute-2 podman[164678]: 2025-10-09 09:51:47.210411185 +0000 UTC m=+0.043044478 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 09 09:51:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:51:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3020292841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.435 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.439 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.458 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.459 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:51:47 compute-2 nova_compute[163961]: 2025-10-09 09:51:47.459 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:51:47 compute-2 sudo[164717]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:51:47 compute-2 sudo[164717]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:47 compute-2 sudo[164717]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:47 compute-2 ceph-mon[5983]: pgmap v548: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3020292841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:51:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/337992853' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:48.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2037733150' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:51:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:48.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:49 compute-2 ceph-mon[5983]: pgmap v549: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:50.047 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:50 compute-2 sudo[164745]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:51:50 compute-2 sudo[164745]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:51:50 compute-2 sudo[164745]: pam_unix(sudo:session): session closed for user root
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:50.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:51 compute-2 ceph-mon[5983]: pgmap v550: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 09:51:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:52.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:52.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:53 compute-2 ceph-mon[5983]: pgmap v551: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 1 op/s
Oct 09 09:51:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:54.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Oct 09 09:51:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4160620108' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:51:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:51:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:54.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:51:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1875037942' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:51:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/4160620108' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:51:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:55 compute-2 ceph-mon[5983]: pgmap v552: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:51:55 compute-2 ceph-mon[5983]: from='client.24673 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-2 ceph-mon[5983]: from='client.14982 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:51:55 compute-2 ceph-mon[5983]: from='client.14982 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 09 09:51:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:56.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:56.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:57 compute-2 podman[164776]: 2025-10-09 09:51:57.217623289 +0000 UTC m=+0.050437260 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:51:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:57 compute-2 ceph-mon[5983]: pgmap v553: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:51:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:51:58.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:51:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:51:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:51:58.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:51:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:51:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:51:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:51:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:51:59 compute-2 ceph-mon[5983]: pgmap v554: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:00.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:52:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:00.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:52:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:01 compute-2 ceph-mon[5983]: pgmap v555: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:02.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:02.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:03 compute-2 ceph-mon[5983]: pgmap v556: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:04 compute-2 rsyslogd[1245]: imjournal: 312 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct 09 09:52:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:04.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:04.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:05 compute-2 ceph-mon[5983]: pgmap v557: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:06.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:06 compute-2 podman[164808]: 2025-10-09 09:52:06.208293181 +0000 UTC m=+0.038682432 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 09:52:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:06.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:08 compute-2 ceph-mon[5983]: pgmap v558: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:08.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:08.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:10 compute-2 ceph-mon[5983]: pgmap v559: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:10.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:52:10.270 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:10 compute-2 sudo[164830]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:10 compute-2 sudo[164830]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:10 compute-2 sudo[164830]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:10.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:12 compute-2 ceph-mon[5983]: pgmap v560: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:52:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1258684450' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:52:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:12.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:12.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:14 compute-2 ceph-mon[5983]: pgmap v561: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:14.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:14.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2909389839' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/247613377' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:16 compute-2 ceph-mon[5983]: pgmap v562: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:16 compute-2 ceph-mon[5983]: from='client.24728 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-2 ceph-mon[5983]: from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-2 ceph-mon[5983]: from='client.24731 -' entity='client.openstack' cmd=[{"prefix": "nfs cluster info", "cluster_id": "cephfs", "format": "json"}]: dispatch
Oct 09 09:52:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:16.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:16 compute-2 podman[164860]: 2025-10-09 09:52:16.202311678 +0000 UTC m=+0.035555095 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct 09 09:52:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:16.976 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:18 compute-2 ceph-mon[5983]: pgmap v563: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:18.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:18 compute-2 podman[164878]: 2025-10-09 09:52:18.201285802 +0000 UTC m=+0.037124426 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct 09 09:52:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:18.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:20 compute-2 ceph-mon[5983]: pgmap v564: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:20.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:20.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:22 compute-2 ceph-mon[5983]: pgmap v565: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:22.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:52:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:22.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:52:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:24 compute-2 ceph-mon[5983]: pgmap v566: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:24.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:24.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:26 compute-2 ceph-mon[5983]: pgmap v567: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:26.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:26.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:28 compute-2 ceph-mon[5983]: pgmap v568: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:28.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:28 compute-2 podman[164905]: 2025-10-09 09:52:28.21954069 +0000 UTC m=+0.052486363 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 09:52:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:28.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:30 compute-2 ceph-mon[5983]: pgmap v569: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:30.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:30 compute-2 sudo[164932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:30 compute-2 sudo[164932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:30 compute-2 sudo[164932]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:30.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:52:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:32.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:52:32 compute-2 ceph-mon[5983]: pgmap v570: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:52:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:32.997 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:34.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:34 compute-2 ceph-mon[5983]: pgmap v571: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:52:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:34.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:36.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:36 compute-2 ceph-mon[5983]: pgmap v572: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:37.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:37 compute-2 podman[164963]: 2025-10-09 09:52:37.210596 +0000 UTC m=+0.045902968 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:52:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:38.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:38 compute-2 ceph-mon[5983]: pgmap v573: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:39.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:40.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:40 compute-2 ceph-mon[5983]: pgmap v574: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:41.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:42.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:42 compute-2 ceph-mon[5983]: pgmap v575: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:43.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:44.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:44 compute-2 ceph-mon[5983]: pgmap v576: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:45.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:46.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:46 compute-2 ceph-mon[5983]: pgmap v577: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:52:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:47.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:47 compute-2 podman[164990]: 2025-10-09 09:52:47.20350789 +0000 UTC m=+0.039013433 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.453 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.454 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.467 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.468 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.468 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.475 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.476 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.477 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.489 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:52:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:52:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/216110203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:47 compute-2 nova_compute[163961]: 2025-10-09 09:52:47.826 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:52:47 compute-2 sudo[165027]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:52:47 compute-2 sudo[165027]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:47 compute-2 sudo[165027]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:47 compute-2 sudo[165054]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:52:47 compute-2 sudo[165054]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.011 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.011 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5376MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.012 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.012 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.057 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.057 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.071 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:52:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:48.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:48 compute-2 ceph-mon[5983]: pgmap v578: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/216110203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:48 compute-2 sudo[165054]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:52:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3056215677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.412 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.415 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.427 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.429 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:52:48 compute-2 nova_compute[163961]: 2025-10-09 09:52:48.429 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.417s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:52:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:49.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:49 compute-2 nova_compute[163961]: 2025-10-09 09:52:49.124 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:49 compute-2 nova_compute[163961]: 2025-10-09 09:52:49.124 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3056215677' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/942557203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2467923929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3099014491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:49 compute-2 podman[165131]: 2025-10-09 09:52:49.204029598 +0000 UTC m=+0.038895831 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct 09 09:52:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:50.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:50 compute-2 ceph-mon[5983]: pgmap v579: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3760170036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:52:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:50 compute-2 sudo[165150]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:52:50 compute-2 sudo[165150]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:50 compute-2 sudo[165150]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:51.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:51 compute-2 sudo[165176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:52:51 compute-2 sudo[165176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:52:51 compute-2 sudo[165176]: pam_unix(sudo:session): session closed for user root
Oct 09 09:52:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:52.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:52 compute-2 ceph-mon[5983]: pgmap v580: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:52:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:52:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:53.023 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:52:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:54.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:54 compute-2 ceph-mon[5983]: pgmap v581: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:52:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:55.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:52:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:56.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:52:56 compute-2 ceph-mon[5983]: pgmap v582: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:52:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:57.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:52:58.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:58 compute-2 ceph-mon[5983]: pgmap v583: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:52:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:52:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:52:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:52:59.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:52:59 compute-2 podman[165208]: 2025-10-09 09:52:59.218875526 +0000 UTC m=+0.051291228 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:52:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:52:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:52:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:52:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:52:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:52:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:00.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:00 compute-2 ceph-mon[5983]: pgmap v584: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:01.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:01 compute-2 ceph-mon[5983]: pgmap v585: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.047 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.048 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:53:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:02.049 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:53:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:02.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:03.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:03 compute-2 ceph-mon[5983]: pgmap v586: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:04.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:05.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:05 compute-2 ceph-mon[5983]: pgmap v587: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:06.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:07 compute-2 ceph-mon[5983]: pgmap v588: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:08.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:08 compute-2 podman[165243]: 2025-10-09 09:53:08.209488618 +0000 UTC m=+0.040753209 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:53:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:09 compute-2 ceph-mon[5983]: pgmap v589: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:10.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.271 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.271 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:53:10.272 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:11 compute-2 sudo[165264]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:11 compute-2 sudo[165264]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:11 compute-2 sudo[165264]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:11.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:11 compute-2 ceph-mon[5983]: pgmap v590: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:53:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:53:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:53:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:53:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:12.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:53:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/208649628' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:53:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:13.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:13 compute-2 ceph-mon[5983]: pgmap v591: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:14.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:15.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:15 compute-2 ceph-mon[5983]: pgmap v592: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:16.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #34. Immutable memtables: 0.
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.262049) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 34
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596262073, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 1318, "num_deletes": 256, "total_data_size": 3199294, "memory_usage": 3247336, "flush_reason": "Manual Compaction"}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #35: started
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596267520, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 35, "file_size": 2067926, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18676, "largest_seqno": 19988, "table_properties": {"data_size": 2062313, "index_size": 2944, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1605, "raw_key_size": 11873, "raw_average_key_size": 18, "raw_value_size": 2050859, "raw_average_value_size": 3270, "num_data_blocks": 132, "num_entries": 627, "num_filter_entries": 627, "num_deletions": 256, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003491, "oldest_key_time": 1760003491, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 5495 microseconds, and 3668 cpu microseconds.
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267545) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #35: 2067926 bytes OK
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.267555) [db/memtable_list.cc:519] [default] Level-0 commit table #35 started
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268151) [db/memtable_list.cc:722] [default] Level-0 commit table #35: memtable #1 done
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268162) EVENT_LOG_v1 {"time_micros": 1760003596268159, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268172) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3193004, prev total WAL file size 3193004, number of live WAL files 2.
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000031.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268659) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323533' seq:0, type:0; will stop at (end)
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [35(2019KB)], [33(11MB)]
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596268680, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [35], "files_L6": [33], "score": -1, "input_data_size": 14226715, "oldest_snapshot_seqno": -1}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #36: 5015 keys, 13754866 bytes, temperature: kUnknown
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596306053, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 36, "file_size": 13754866, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13719680, "index_size": 21572, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12549, "raw_key_size": 126900, "raw_average_key_size": 25, "raw_value_size": 13626768, "raw_average_value_size": 2717, "num_data_blocks": 890, "num_entries": 5015, "num_filter_entries": 5015, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003596, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306325) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 13754866 bytes
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306800) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 378.9 rd, 366.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 11.6 +0.0 blob) out(13.1 +0.0 blob), read-write-amplify(13.5) write-amplify(6.7) OK, records in: 5541, records dropped: 526 output_compression: NoCompression
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.306814) EVENT_LOG_v1 {"time_micros": 1760003596306808, "job": 18, "event": "compaction_finished", "compaction_time_micros": 37546, "compaction_time_cpu_micros": 19238, "output_level": 6, "num_output_files": 1, "total_output_size": 13754866, "num_input_records": 5541, "num_output_records": 5015, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596307535, "job": 18, "event": "table_file_deletion", "file_number": 35}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000033.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003596309248, "job": 18, "event": "table_file_deletion", "file_number": 33}
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.268621) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309341) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309344) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309346) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:53:16.309347) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:53:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:17.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:17 compute-2 ceph-mon[5983]: pgmap v593: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:18.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:18 compute-2 podman[165296]: 2025-10-09 09:53:18.205361016 +0000 UTC m=+0.038153522 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Oct 09 09:53:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:19.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:19 compute-2 ceph-mon[5983]: pgmap v594: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:20.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:20 compute-2 podman[165314]: 2025-10-09 09:53:20.203986044 +0000 UTC m=+0.036665442 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:53:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:21.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:21 compute-2 ceph-mon[5983]: pgmap v595: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:22.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:23.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:23 compute-2 ceph-mon[5983]: pgmap v596: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:25.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:25 compute-2 ceph-mon[5983]: pgmap v597: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:26.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:27.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:27 compute-2 ceph-mon[5983]: pgmap v598: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:53:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:28.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:29.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:29 compute-2 ceph-mon[5983]: pgmap v599: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:30.151 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:30 compute-2 podman[165341]: 2025-10-09 09:53:30.219365123 +0000 UTC m=+0.055439120 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:31 compute-2 sudo[165366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:31 compute-2 sudo[165366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:31 compute-2 sudo[165366]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:31.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:31 compute-2 ceph-mon[5983]: pgmap v600: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:32.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:33.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:33 compute-2 ceph-mon[5983]: pgmap v601: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:34.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:53:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:35.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:53:35 compute-2 ceph-mon[5983]: pgmap v602: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:36.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:37.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:37 compute-2 ceph-mon[5983]: pgmap v603: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:38.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:39.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:39 compute-2 podman[165399]: 2025-10-09 09:53:39.209896951 +0000 UTC m=+0.045572106 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct 09 09:53:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:39 compute-2 ceph-mon[5983]: pgmap v604: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:41.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:41 compute-2 ceph-mon[5983]: pgmap v605: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:42.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:53:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:43.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:53:43 compute-2 ceph-mon[5983]: pgmap v606: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:44.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:45.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:45 compute-2 ceph-mon[5983]: pgmap v607: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:53:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:46.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:53:46 compute-2 nova_compute[163961]: 2025-10-09 09:53:46.175 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:47.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:47 compute-2 nova_compute[163961]: 2025-10-09 09:53:47.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:47 compute-2 ceph-mon[5983]: pgmap v608: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:48.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.170 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-2 nova_compute[163961]: 2025-10-09 09:53:48.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1414105849' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:49.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.192 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.193 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:53:49 compute-2 podman[165426]: 2025-10-09 09:53:49.223091843 +0000 UTC m=+0.046036042 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct 09 09:53:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.548 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.355s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:53:49 compute-2 ceph-mon[5983]: pgmap v609: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4016438260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1280325955' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/325359725' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3354045688' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.727 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.728 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5376MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.729 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.729 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.774 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.775 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:53:49 compute-2 nova_compute[163961]: 2025-10-09 09:53:49.790 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:53:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:53:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1008308126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.127 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.130 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.148 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.149 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:53:50 compute-2 nova_compute[163961]: 2025-10-09 09:53:50.150 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:53:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:50.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1008308126' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:53:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:51.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:51 compute-2 sudo[165488]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:53:51 compute-2 sudo[165488]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:51 compute-2 sudo[165488]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:51 compute-2 nova_compute[163961]: 2025-10-09 09:53:51.150 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:53:51 compute-2 podman[165512]: 2025-10-09 09:53:51.177490037 +0000 UTC m=+0.039414257 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 09:53:51 compute-2 ceph-mon[5983]: pgmap v610: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:53:51 compute-2 sudo[165531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:53:51 compute-2 sudo[165531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:51 compute-2 sudo[165531]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:51 compute-2 sudo[165556]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:53:51 compute-2 sudo[165556]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:52.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:52 compute-2 sudo[165556]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:53:52 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:53.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:53 compute-2 ceph-mon[5983]: pgmap v611: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:54.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:55.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:55 compute-2 ceph-mon[5983]: pgmap v612: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:55 compute-2 sudo[165613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:53:55 compute-2 sudo[165613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:53:55 compute-2 sudo[165613]: pam_unix(sudo:session): session closed for user root
Oct 09 09:53:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:53:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:56.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:53:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:53:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:53:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:53:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:57.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:57 compute-2 ceph-mon[5983]: pgmap v613: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:53:58.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:53:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:53:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:53:59.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:53:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:53:59 compute-2 ceph-mon[5983]: pgmap v614: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:53:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:53:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:53:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:53:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:00.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:01.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:01 compute-2 podman[165643]: 2025-10-09 09:54:01.218424397 +0000 UTC m=+0.052992125 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 09:54:01 compute-2 ceph-mon[5983]: pgmap v615: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 09:54:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:02.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:03.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:03 compute-2 ceph-mon[5983]: pgmap v616: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:04.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:05.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:05 compute-2 ceph-mon[5983]: pgmap v617: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:06.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:07.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:07 compute-2 ceph-mon[5983]: pgmap v618: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:08.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:09.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:09 compute-2 ceph-mon[5983]: pgmap v619: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #37. Immutable memtables: 0.
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.714162) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 37
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649714239, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 792, "num_deletes": 251, "total_data_size": 1569754, "memory_usage": 1595368, "flush_reason": "Manual Compaction"}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #38: started
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649719000, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 38, "file_size": 1032670, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19993, "largest_seqno": 20780, "table_properties": {"data_size": 1028915, "index_size": 1535, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8578, "raw_average_key_size": 19, "raw_value_size": 1021376, "raw_average_value_size": 2316, "num_data_blocks": 68, "num_entries": 441, "num_filter_entries": 441, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003597, "oldest_key_time": 1760003597, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 4882 microseconds, and 3829 cpu microseconds.
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719047) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #38: 1032670 bytes OK
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719068) [db/memtable_list.cc:519] [default] Level-0 commit table #38 started
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719467) [db/memtable_list.cc:722] [default] Level-0 commit table #38: memtable #1 done
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719480) EVENT_LOG_v1 {"time_micros": 1760003649719477, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719496) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 1565615, prev total WAL file size 1565615, number of live WAL files 2.
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000034.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719925) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [38(1008KB)], [36(13MB)]
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649719963, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [38], "files_L6": [36], "score": -1, "input_data_size": 14787536, "oldest_snapshot_seqno": -1}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #39: 4940 keys, 12621288 bytes, temperature: kUnknown
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649752994, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 39, "file_size": 12621288, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12587595, "index_size": 20271, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12357, "raw_key_size": 125966, "raw_average_key_size": 25, "raw_value_size": 12496861, "raw_average_value_size": 2529, "num_data_blocks": 833, "num_entries": 4940, "num_filter_entries": 4940, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003649, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 39, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753208) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 12621288 bytes
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753752) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 446.6 rd, 381.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 13.1 +0.0 blob) out(12.0 +0.0 blob), read-write-amplify(26.5) write-amplify(12.2) OK, records in: 5456, records dropped: 516 output_compression: NoCompression
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.753767) EVENT_LOG_v1 {"time_micros": 1760003649753760, "job": 20, "event": "compaction_finished", "compaction_time_micros": 33115, "compaction_time_cpu_micros": 17611, "output_level": 6, "num_output_files": 1, "total_output_size": 12621288, "num_input_records": 5456, "num_output_records": 4940, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649754111, "job": 20, "event": "table_file_deletion", "file_number": 38}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000036.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003649756050, "job": 20, "event": "table_file_deletion", "file_number": 36}
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.719890) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756105) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756109) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756110) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756112) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:54:09.756113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:54:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:10.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:10 compute-2 podman[165674]: 2025-10-09 09:54:10.208285222 +0000 UTC m=+0.042716052 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:10.273 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:11.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:11 compute-2 sudo[165692]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:11 compute-2 sudo[165692]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:11 compute-2 sudo[165692]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:11 compute-2 ceph-mon[5983]: pgmap v620: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:12.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:13.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:13 compute-2 ceph-mon[5983]: pgmap v621: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:14.185 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:15.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:15 compute-2 ceph-mon[5983]: pgmap v622: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:16.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:17.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:17 compute-2 ceph-mon[5983]: pgmap v623: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:18.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:19.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:19 compute-2 ceph-mon[5983]: pgmap v624: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:20.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:20 compute-2 podman[165726]: 2025-10-09 09:54:20.202327297 +0000 UTC m=+0.038263888 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 09:54:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:21.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:21 compute-2 ceph-mon[5983]: pgmap v625: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:22.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:22 compute-2 podman[165744]: 2025-10-09 09:54:22.210360778 +0000 UTC m=+0.041946932 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:23.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:23 compute-2 ceph-mon[5983]: pgmap v626: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:24.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:25.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:25 compute-2 ceph-mon[5983]: pgmap v627: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:26.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:27.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:27 compute-2 ceph-mon[5983]: pgmap v628: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:28.194 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=404 latency=0.001000010s ======
Oct 09 09:54:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:28.653 +0000] "GET /healthcheck HTTP/1.1" 404 242 - "python-urllib3/1.26.5" - latency=0.001000010s
Oct 09 09:54:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:29.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:29 compute-2 PackageKit[97844]: daemon quit
Oct 09 09:54:29 compute-2 systemd[1]: packagekit.service: Deactivated successfully.
Oct 09 09:54:29 compute-2 ceph-mon[5983]: pgmap v629: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:30.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:31.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:31 compute-2 sudo[165771]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:31 compute-2 sudo[165771]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:31 compute-2 sudo[165771]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:31 compute-2 podman[165795]: 2025-10-09 09:54:31.304086657 +0000 UTC m=+0.051041989 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 09:54:31 compute-2 ceph-mon[5983]: pgmap v630: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:32.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:32 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e134 e134: 3 total, 3 up, 3 in
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:33.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:33 compute-2 ceph-mon[5983]: pgmap v631: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:33 compute-2 ceph-mon[5983]: osdmap e134: 3 total, 3 up, 3 in
Oct 09 09:54:33 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e135 e135: 3 total, 3 up, 3 in
Oct 09 09:54:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:34.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:34 compute-2 ceph-mon[5983]: osdmap e135: 3 total, 3 up, 3 in
Oct 09 09:54:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e136 e136: 3 total, 3 up, 3 in
Oct 09 09:54:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:35.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:35 compute-2 ceph-mon[5983]: pgmap v634: 337 pgs: 337 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Oct 09 09:54:35 compute-2 ceph-mon[5983]: osdmap e136: 3 total, 3 up, 3 in
Oct 09 09:54:35 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e137 e137: 3 total, 3 up, 3 in
Oct 09 09:54:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:36.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:36 compute-2 ceph-mon[5983]: osdmap e137: 3 total, 3 up, 3 in
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:37.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:37 compute-2 ceph-mon[5983]: pgmap v637: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 47 KiB/s rd, 5.1 MiB/s wr, 68 op/s
Oct 09 09:54:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:38.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:39.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:39 compute-2 ceph-mon[5983]: pgmap v638: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 3.7 MiB/s wr, 50 op/s
Oct 09 09:54:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:40.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:41.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:41 compute-2 podman[165829]: 2025-10-09 09:54:41.207925816 +0000 UTC m=+0.042561641 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct 09 09:54:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 e138: 3 total, 3 up, 3 in
Oct 09 09:54:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:41 compute-2 ceph-mon[5983]: pgmap v639: 337 pgs: 337 active+clean; 21 MiB data, 174 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 3.2 MiB/s wr, 42 op/s
Oct 09 09:54:41 compute-2 ceph-mon[5983]: osdmap e138: 3 total, 3 up, 3 in
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:54:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:42.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:43.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:43 compute-2 ceph-mon[5983]: pgmap v641: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 5.5 MiB/s wr, 52 op/s
Oct 09 09:54:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:44.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:45.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:45 compute-2 ceph-mon[5983]: pgmap v642: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.4 MiB/s wr, 14 op/s
Oct 09 09:54:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 09:54:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:46.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 09:54:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:47.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:47 compute-2 ceph-mon[5983]: pgmap v643: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:48 compute-2 nova_compute[163961]: 2025-10-09 09:54:48.190 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:48.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4012396778' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:54:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:49.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.196 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.197 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:54:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:54:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3185983185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.542 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.345s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.742 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.743 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5372MB free_disk=59.98828125GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.743 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.744 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.804 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.805 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:54:49 compute-2 nova_compute[163961]: 2025-10-09 09:54:49.826 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:54:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:49 compute-2 ceph-mon[5983]: pgmap v644: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1882083905' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3874590807' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3185983185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:54:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4109511036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:54:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1194021554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.163 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.167 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.187 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:54:50 compute-2 nova_compute[163961]: 2025-10-09 09:54:50.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:54:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:50.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4109511036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1194021554' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:54:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:51.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.185 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:51 compute-2 podman[165900]: 2025-10-09 09:54:51.196463239 +0000 UTC m=+0.033071176 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:51 compute-2 nova_compute[163961]: 2025-10-09 09:54:51.199 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:54:51 compute-2 sudo[165916]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:54:51 compute-2 sudo[165916]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:51 compute-2 sudo[165916]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:51 compute-2 ceph-mon[5983]: pgmap v645: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 2.0 MiB/s wr, 12 op/s
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:52.051 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:54:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:52.052 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:54:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:54:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:52.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:53.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:53 compute-2 podman[165943]: 2025-10-09 09:54:53.214358297 +0000 UTC m=+0.041172840 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 09:54:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:53 compute-2 ceph-mon[5983]: pgmap v646: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:54:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:54.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:55.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:55 compute-2 sudo[165963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:54:55 compute-2 sudo[165963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:55 compute-2 sudo[165963]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:55 compute-2 sudo[165988]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:54:55 compute-2 sudo[165988]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:54:55 compute-2 ceph-mon[5983]: pgmap v647: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:54:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:54:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:56.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:54:56 compute-2 sudo[165988]: pam_unix(sudo:session): session closed for user root
Oct 09 09:54:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:54:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:54:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:57.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:54:57 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:54:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:54:58.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:54:58 compute-2 ceph-mon[5983]: pgmap v648: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:54:58 compute-2 ceph-mon[5983]: pgmap v649: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:54:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:59 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:54:59.054 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:54:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:54:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:54:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:54:59.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:54:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:54:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:54:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:54:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:54:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:00.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:00 compute-2 ceph-mon[5983]: pgmap v650: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:00 compute-2 sudo[166048]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:55:00 compute-2 sudo[166048]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:00 compute-2 sudo[166048]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:01.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:55:01 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:55:01 compute-2 ceph-mon[5983]: pgmap v651: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 09:55:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:55:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:02.223 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:55:02 compute-2 podman[166074]: 2025-10-09 09:55:02.272042746 +0000 UTC m=+0.100173536 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:03.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:03 compute-2 ceph-mon[5983]: pgmap v652: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:04.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:05.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:05 compute-2 ceph-mon[5983]: pgmap v653: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:06.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:07.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:07 compute-2 ceph-mon[5983]: pgmap v654: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 09:55:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:08.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:09.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:09 compute-2 ceph-mon[5983]: pgmap v655: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:10.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.274 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.274 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:10.275 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:11.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:11 compute-2 sudo[166106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:11 compute-2 sudo[166106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:11 compute-2 sudo[166106]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:11 compute-2 podman[166130]: 2025-10-09 09:55:11.484547862 +0000 UTC m=+0.067425970 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 09 09:55:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:11 compute-2 ceph-mon[5983]: pgmap v656: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:55:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:55:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3577999995' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:12.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1002258365' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:13.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:13 compute-2 ceph-mon[5983]: pgmap v657: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:14.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:55:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Cumulative writes: 3970 writes, 21K keys, 3970 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.05 MB/s
                                          Cumulative WAL: 3970 writes, 3970 syncs, 1.00 writes per sync, written: 0.05 GB, 0.05 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 1495 writes, 7150 keys, 1495 commit groups, 1.0 writes per commit group, ingest: 16.84 MB, 0.03 MB/s
                                          Interval WAL: 1495 writes, 1495 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    380.3      0.09              0.06        10    0.009       0      0       0.0       0.0
                                            L6      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.5    387.6    328.9      0.35              0.18         9    0.039     42K   4799       0.0       0.0
                                           Sum      1/0   12.04 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.5    311.3    339.0      0.43              0.24        19    0.023     42K   4799       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   5.5    292.8    297.6      0.21              0.10         8    0.026     22K   2557       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    387.6    328.9      0.35              0.18         9    0.039     42K   4799       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    384.3      0.08              0.06         9    0.009       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1200.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.032, interval 0.011
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.14 GB write, 0.12 MB/s write, 0.13 GB read, 0.11 MB/s read, 0.4 seconds
                                          Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 8.05 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 6.5e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(477,7.69 MB,2.53081%) FilterBlock(19,127.80 KB,0.0410532%) IndexBlock(19,240.41 KB,0.0772275%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 09:55:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e138 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e139 e139: 3 total, 3 up, 3 in
Oct 09 09:55:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:15.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e140 e140: 3 total, 3 up, 3 in
Oct 09 09:55:15 compute-2 ceph-mon[5983]: pgmap v658: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:55:15 compute-2 ceph-mon[5983]: osdmap e139: 3 total, 3 up, 3 in
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:16.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:16 compute-2 ceph-mon[5983]: osdmap e140: 3 total, 3 up, 3 in
Oct 09 09:55:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:17.215 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:17 compute-2 ceph-mon[5983]: pgmap v661: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 11 op/s
Oct 09 09:55:17 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1547011463' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:18.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/980579080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:19.218 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:19 compute-2 ceph-mon[5983]: pgmap v662: 337 pgs: 337 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 127 B/s wr, 10 op/s
Oct 09 09:55:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:20.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:21.221 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 e141: 3 total, 3 up, 3 in
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:21 compute-2 ceph-mon[5983]: pgmap v663: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct 09 09:55:21 compute-2 ceph-mon[5983]: osdmap e141: 3 total, 3 up, 3 in
Oct 09 09:55:22 compute-2 podman[166160]: 2025-10-09 09:55:22.206321594 +0000 UTC m=+0.038777536 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct 09 09:55:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000008s ======
Oct 09 09:55:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:22.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000008s
Oct 09 09:55:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:23.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:23 compute-2 ceph-mon[5983]: pgmap v665: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 65 op/s
Oct 09 09:55:24 compute-2 podman[166179]: 2025-10-09 09:55:24.210909241 +0000 UTC m=+0.038466782 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct 09 09:55:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:24.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:25.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:25 compute-2 ceph-mon[5983]: pgmap v666: 337 pgs: 337 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 2.4 MiB/s wr, 47 op/s
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:26.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:27.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:27 compute-2 ceph-mon[5983]: pgmap v667: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 09 09:55:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:28.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:29.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:30 compute-2 ceph-mon[5983]: pgmap v668: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 122 op/s
Oct 09 09:55:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:30.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:31.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:31 compute-2 sudo[166203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:31 compute-2 sudo[166203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:31 compute-2 sudo[166203]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:31 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:32 compute-2 ceph-mon[5983]: pgmap v669: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 80 op/s
Oct 09 09:55:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:32.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:33 compute-2 podman[166230]: 2025-10-09 09:55:33.222273934 +0000 UTC m=+0.054393566 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:55:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:33.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:34 compute-2 ceph-mon[5983]: pgmap v670: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 69 op/s
Oct 09 09:55:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:34.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:35.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:36 compute-2 ceph-mon[5983]: pgmap v671: 337 pgs: 337 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 66 op/s
Oct 09 09:55:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:36.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:37.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:38 compute-2 ceph-mon[5983]: pgmap v672: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 129 op/s
Oct 09 09:55:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:38.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:39.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:40 compute-2 ceph-mon[5983]: pgmap v673: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:55:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:40.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:41.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:42 compute-2 ceph-mon[5983]: pgmap v674: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:42 compute-2 podman[166262]: 2025-10-09 09:55:42.206292527 +0000 UTC m=+0.037925311 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:55:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:42.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:43.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:44 compute-2 ceph-mon[5983]: pgmap v675: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:44.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:45.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:46 compute-2 ceph-mon[5983]: pgmap v676: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 270 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.182 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 09:55:46 compute-2 nova_compute[163961]: 2025-10-09 09:55:46.187 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:46.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:47.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:48 compute-2 ceph-mon[5983]: pgmap v677: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 276 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:55:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.193 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.209 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.210 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.210 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:49.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1559129444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.546 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.736 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5372MB free_disk=59.94271469116211GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.841 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.841 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:55:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.890 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.915 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.915 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.959 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.975 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 09:55:49 compute-2 nova_compute[163961]: 2025-10-09 09:55:49.985 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:55:50 compute-2 ceph-mon[5983]: pgmap v678: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 6.3 KiB/s rd, 16 KiB/s wr, 1 op/s
Oct 09 09:55:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2405746203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1559129444' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/292366459' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2594520052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:55:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/576738267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.322 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.325 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.338 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.339 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:55:50 compute-2 nova_compute[163961]: 2025-10-09 09:55:50.339 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:55:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2594520052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/576738267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3508635897' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:51.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.314 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-2 nova_compute[163961]: 2025-10-09 09:55:51.324 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:55:51 compute-2 sudo[166333]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:55:51 compute-2 sudo[166333]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:55:51 compute-2 sudo[166333]: pam_unix(sudo:session): session closed for user root
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:52 compute-2 ceph-mon[5983]: pgmap v679: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 16 KiB/s wr, 9 op/s
Oct 09 09:55:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3208036889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:55:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:53 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:53.198 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:55:53 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:55:53.199 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:55:53 compute-2 podman[166360]: 2025-10-09 09:55:53.232515268 +0000 UTC m=+0.063011874 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 09:55:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:55:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:53.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:55:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:54 compute-2 ceph-mon[5983]: pgmap v680: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct 09 09:55:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:54.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3156609996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2397916184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:55:55 compute-2 podman[166377]: 2025-10-09 09:55:55.201284378 +0000 UTC m=+0.037359268 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:55:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:55.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:55:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:55:56 compute-2 ceph-mon[5983]: pgmap v681: 337 pgs: 337 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 3.4 KiB/s wr, 8 op/s
Oct 09 09:55:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:56.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:57.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:58 compute-2 ceph-mon[5983]: pgmap v682: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct 09 09:55:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:55:58.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:55:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:55:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:55:59.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:55:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:55:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:55:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:55:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:55:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:00 compute-2 ceph-mon[5983]: pgmap v683: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 09 09:56:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:00.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:00 compute-2 sudo[166402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:56:00 compute-2 sudo[166402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:00 compute-2 sudo[166402]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:00 compute-2 sudo[166427]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 09:56:00 compute-2 sudo[166427]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:01 compute-2 sudo[166427]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-2 sudo[166470]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:56:01 compute-2 sudo[166470]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:01 compute-2 sudo[166470]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-2 sudo[166495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:56:01 compute-2 sudo[166495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:01.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:01 compute-2 sudo[166495]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:02 compute-2 ceph-mon[5983]: pgmap v684: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:56:02 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:56:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:02.200 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:56:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:02.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:03 compute-2 ceph-mon[5983]: pgmap v685: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 09 09:56:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:03.275 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:04 compute-2 podman[166552]: 2025-10-09 09:56:04.218751082 +0000 UTC m=+0.054704016 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 09:56:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:04.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:05 compute-2 ceph-mon[5983]: pgmap v686: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 113 op/s
Oct 09 09:56:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:05.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:05 compute-2 sudo[166578]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:56:05 compute-2 sudo[166578]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:05 compute-2 sudo[166578]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:06.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:56:06 compute-2 ceph-mon[5983]: pgmap v687: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Oct 09 09:56:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 09:56:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 6965 writes, 28K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6965 writes, 1430 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 985 writes, 2603 keys, 985 commit groups, 1.0 writes per commit group, ingest: 2.82 MB, 0.00 MB/s
                                           Interval WAL: 985 writes, 447 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:07.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:08.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:08 compute-2 ceph-mon[5983]: pgmap v688: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct 09 09:56:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:09.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.275 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:56:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:10.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:10 compute-2 ceph-mon[5983]: pgmap v689: 337 pgs: 337 active+clean; 167 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 18 KiB/s wr, 83 op/s
Oct 09 09:56:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:11.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:11 compute-2 sudo[166609]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:11 compute-2 sudo[166609]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:11 compute-2 sudo[166609]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:56:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:56:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:56:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:12.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:12 compute-2 ceph-mon[5983]: pgmap v690: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 204 KiB/s rd, 2.4 MiB/s wr, 67 op/s
Oct 09 09:56:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:56:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/335239506' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:56:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:13 compute-2 podman[166635]: 2025-10-09 09:56:13.207328373 +0000 UTC m=+0.042078983 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 09:56:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:13.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:14.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:14 compute-2 ceph-mon[5983]: pgmap v691: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:15.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Oct 09 09:56:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:16.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:16 compute-2 ceph-mon[5983]: pgmap v692: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 184 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:17.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:18.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:18 compute-2 ceph-mon[5983]: pgmap v693: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:19.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/12413591' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:20.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:20 compute-2 ceph-mon[5983]: pgmap v694: 337 pgs: 337 active+clean; 200 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 183 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct 09 09:56:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:21.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:22.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:22 compute-2 ceph-mon[5983]: pgmap v695: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.2 MiB/s wr, 232 op/s
Oct 09 09:56:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:23.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:24 compute-2 podman[166664]: 2025-10-09 09:56:24.20132848 +0000 UTC m=+0.034384081 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 09:56:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:24.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:24 compute-2 ceph-mon[5983]: pgmap v696: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct 09 09:56:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:25.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:26 compute-2 podman[166682]: 2025-10-09 09:56:26.21072156 +0000 UTC m=+0.047233848 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Oct 09 09:56:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:26.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:26 compute-2 ceph-mon[5983]: pgmap v697: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 106 KiB/s rd, 24 KiB/s wr, 173 op/s
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:27.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:28.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:28 compute-2 ceph-mon[5983]: pgmap v698: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 22 KiB/s wr, 172 op/s
Oct 09 09:56:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2733332451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:29.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:30.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:30 compute-2 ceph-mon[5983]: pgmap v699: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 105 KiB/s rd, 11 KiB/s wr, 172 op/s
Oct 09 09:56:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:31.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:31 compute-2 sudo[166705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:31 compute-2 sudo[166705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:31 compute-2 sudo[166705]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:32.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:32 compute-2 ceph-mon[5983]: pgmap v700: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 125 KiB/s rd, 12 KiB/s wr, 200 op/s
Oct 09 09:56:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:33.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:34.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:34 compute-2 ceph-mon[5983]: pgmap v701: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:35 compute-2 podman[166733]: 2025-10-09 09:56:35.219030676 +0000 UTC m=+0.051273301 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 09:56:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:35.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:36.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:36 compute-2 ceph-mon[5983]: pgmap v702: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:37.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:38.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:38 compute-2 ceph-mon[5983]: pgmap v703: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:39.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:40.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:40 compute-2 ceph-mon[5983]: pgmap v704: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 09:56:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:41.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:42.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:42 compute-2 ceph-mon[5983]: pgmap v705: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 09:56:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1320803762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:43.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:44 compute-2 podman[166766]: 2025-10-09 09:56:44.206362109 +0000 UTC m=+0.038866719 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:56:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:44.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:44 compute-2 ceph-mon[5983]: pgmap v706: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:56:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:45.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:45 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1755900932' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:46.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:46 compute-2 ceph-mon[5983]: pgmap v707: 337 pgs: 337 active+clean; 54 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 336 KiB/s wr, 4 op/s
Oct 09 09:56:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1438079082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:47.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:48.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:48 compute-2 ceph-mon[5983]: pgmap v708: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:56:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:49 compute-2 nova_compute[163961]: 2025-10-09 09:56:49.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:49 compute-2 nova_compute[163961]: 2025-10-09 09:56:49.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:49.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:56:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3254798988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3254798988' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:56:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:50 compute-2 nova_compute[163961]: 2025-10-09 09:56:50.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:50.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:50 compute-2 ceph-mon[5983]: pgmap v709: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:56:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3169311562' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1075486111' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:51.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:56:51 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1845698345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.533 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:51 compute-2 sudo[166813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:56:51 compute-2 sudo[166813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:56:51 compute-2 sudo[166813]: pam_unix(sudo:session): session closed for user root
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.736 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.737 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5354MB free_disk=59.967525482177734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.737 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.738 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.778 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.779 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:56:51 compute-2 nova_compute[163961]: 2025-10-09 09:56:51.790 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:56:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2705111090' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1845698345' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:56:52 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1583333975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.137 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.140 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.150 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.151 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:56:52 compute-2 nova_compute[163961]: 2025-10-09 09:56:52.151 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.414s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:56:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:52.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:52 compute-2 ceph-mon[5983]: pgmap v710: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:56:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1583333975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:56:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:53 compute-2 nova_compute[163961]: 2025-10-09 09:56:53.157 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:53.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:54 compute-2 nova_compute[163961]: 2025-10-09 09:56:54.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:56:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:54.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:54 compute-2 ceph-mon[5983]: pgmap v711: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 09:56:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:55 compute-2 podman[166863]: 2025-10-09 09:56:55.203512402 +0000 UTC m=+0.035988053 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct 09 09:56:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:55.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:56:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:56:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:56:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:56.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:56:56 compute-2 ceph-mon[5983]: pgmap v712: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:57 compute-2 podman[166881]: 2025-10-09 09:56:57.209342047 +0000 UTC m=+0.041808894 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct 09 09:56:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:56:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:57.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:56:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:56:58.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:58 compute-2 ceph-mon[5983]: pgmap v713: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 99 op/s
Oct 09 09:56:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:56:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:56:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:56:59.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:56:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:56:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:56:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:56:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:56:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:00.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:00 compute-2 ceph-mon[5983]: pgmap v714: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:57:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:01.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:02.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:02 compute-2 ceph-mon[5983]: pgmap v715: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 09 09:57:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:03.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:04.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:04 compute-2 ceph-mon[5983]: pgmap v716: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:57:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:05.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:05 compute-2 sudo[166907]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:57:05 compute-2 sudo[166907]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:05 compute-2 sudo[166907]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:05 compute-2 sudo[166938]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:57:05 compute-2 sudo[166938]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:05 compute-2 podman[166931]: 2025-10-09 09:57:05.817586562 +0000 UTC m=+0.062204036 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:06 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:06.085 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:57:06 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:06.086 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:57:06 compute-2 sudo[166938]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:06.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:06 compute-2 ceph-mon[5983]: pgmap v717: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:57:06 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:57:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:07.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:07 compute-2 ceph-mon[5983]: pgmap v718: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:08.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:09.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:09 compute-2 ceph-mon[5983]: pgmap v719: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.087 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:57:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:10.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:10 compute-2 sudo[167014]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:57:10 compute-2 sudo[167014]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:10 compute-2 sudo[167014]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:57:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:11.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:11 compute-2 sudo[167040]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:11 compute-2 sudo[167040]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:11 compute-2 sudo[167040]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:57:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:57:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:57:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:57:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:12 compute-2 ceph-mon[5983]: pgmap v720: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 324 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:57:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:57:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3137381097' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:57:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:12.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:13.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:14 compute-2 ceph-mon[5983]: pgmap v721: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 09 09:57:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:14.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:15 compute-2 podman[167068]: 2025-10-09 09:57:15.20738132 +0000 UTC m=+0.040826750 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct 09 09:57:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:15.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:16 compute-2 ceph-mon[5983]: pgmap v722: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 18 KiB/s wr, 2 op/s
Oct 09 09:57:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:16.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:17.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:18 compute-2 ceph-mon[5983]: pgmap v723: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 8.3 KiB/s rd, 6.4 KiB/s wr, 2 op/s
Oct 09 09:57:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:18.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:19.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:20 compute-2 ceph-mon[5983]: pgmap v724: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct 09 09:57:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3832152381' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:20.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:21.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:22 compute-2 ceph-mon[5983]: pgmap v725: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.3 KiB/s rd, 4.7 KiB/s wr, 1 op/s
Oct 09 09:57:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:22.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:23.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:24 compute-2 ceph-mon[5983]: pgmap v726: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 9.8 KiB/s wr, 30 op/s
Oct 09 09:57:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:24.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:25.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:26 compute-2 podman[167097]: 2025-10-09 09:57:26.200396438 +0000 UTC m=+0.036814699 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 09:57:26 compute-2 ceph-mon[5983]: pgmap v727: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:26.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:27.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:28 compute-2 podman[167114]: 2025-10-09 09:57:28.228419471 +0000 UTC m=+0.064420373 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 09 09:57:28 compute-2 ceph-mon[5983]: pgmap v728: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct 09 09:57:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:28.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:57:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:29.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:57:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:30 compute-2 ceph-mon[5983]: pgmap v729: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:30.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:31.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:31 compute-2 sudo[167135]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:31 compute-2 sudo[167135]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:31 compute-2 sudo[167135]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:32 compute-2 ceph-mon[5983]: pgmap v730: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:32.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:33 compute-2 ceph-mon[5983]: pgmap v731: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 5.2 KiB/s wr, 29 op/s
Oct 09 09:57:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:33.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:34.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:35 compute-2 ceph-mon[5983]: pgmap v732: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #40. Immutable memtables: 0.
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.357225) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 40
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855357244, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2398, "num_deletes": 251, "total_data_size": 6398910, "memory_usage": 6495200, "flush_reason": "Manual Compaction"}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #41: started
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855367040, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 41, "file_size": 4153932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20785, "largest_seqno": 23178, "table_properties": {"data_size": 4144126, "index_size": 6236, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20058, "raw_average_key_size": 20, "raw_value_size": 4124479, "raw_average_value_size": 4187, "num_data_blocks": 272, "num_entries": 985, "num_filter_entries": 985, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003650, "oldest_key_time": 1760003650, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 9841 microseconds, and 7092 cpu microseconds.
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367065) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #41: 4153932 bytes OK
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367077) [db/memtable_list.cc:519] [default] Level-0 commit table #41 started
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367555) [db/memtable_list.cc:722] [default] Level-0 commit table #41: memtable #1 done
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367566) EVENT_LOG_v1 {"time_micros": 1760003855367563, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.367576) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 6388365, prev total WAL file size 6388365, number of live WAL files 2.
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000037.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.368405) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [41(4056KB)], [39(12MB)]
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855368431, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [41], "files_L6": [39], "score": -1, "input_data_size": 16775220, "oldest_snapshot_seqno": -1}
Oct 09 09:57:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:35.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #42: 5401 keys, 14600848 bytes, temperature: kUnknown
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855415357, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 42, "file_size": 14600848, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14562498, "index_size": 23776, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 136188, "raw_average_key_size": 25, "raw_value_size": 14462053, "raw_average_value_size": 2677, "num_data_blocks": 981, "num_entries": 5401, "num_filter_entries": 5401, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003855, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 42, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.415518) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 14600848 bytes
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416020) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 357.1 rd, 310.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.0, 12.0 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 5925, records dropped: 524 output_compression: NoCompression
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.416036) EVENT_LOG_v1 {"time_micros": 1760003855416029, "job": 22, "event": "compaction_finished", "compaction_time_micros": 46972, "compaction_time_cpu_micros": 21253, "output_level": 6, "num_output_files": 1, "total_output_size": 14600848, "num_input_records": 5925, "num_output_records": 5401, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855416595, "job": 22, "event": "table_file_deletion", "file_number": 41}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000039.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003855418110, "job": 22, "event": "table_file_deletion", "file_number": 39}
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.368375) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418243) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418246) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:57:35.418247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:36 compute-2 podman[167164]: 2025-10-09 09:57:36.219276584 +0000 UTC m=+0.050786760 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller)
Oct 09 09:57:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:37 compute-2 ceph-mon[5983]: pgmap v733: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 09:57:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:57:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:37.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:57:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:38.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:39 compute-2 ceph-mon[5983]: pgmap v734: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:39 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/138502189' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:39.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:40.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:41 compute-2 ceph-mon[5983]: pgmap v735: 337 pgs: 337 active+clean; 41 MiB data, 263 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:57:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:41.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:42.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:43 compute-2 ceph-mon[5983]: pgmap v736: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:43.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:44.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:45 compute-2 ceph-mon[5983]: pgmap v737: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:45.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:46 compute-2 podman[167198]: 2025-10-09 09:57:46.229373218 +0000 UTC m=+0.065468268 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.vendor=CentOS)
Oct 09 09:57:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:46.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:47 compute-2 ceph-mon[5983]: pgmap v738: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:47.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:48.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:49.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:49 compute-2 ceph-mon[5983]: pgmap v739: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2737454131' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1167411416' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:57:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:50 compute-2 nova_compute[163961]: 2025-10-09 09:57:50.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3090346491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:50.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/938447497' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3090346491' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:51 compute-2 nova_compute[163961]: 2025-10-09 09:57:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:51.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:51 compute-2 ceph-mon[5983]: pgmap v740: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:57:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:51 compute-2 sudo[167221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:57:51 compute-2 sudo[167221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:57:51 compute-2 sudo[167221]: pam_unix(sudo:session): session closed for user root
Oct 09 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:52 compute-2 nova_compute[163961]: 2025-10-09 09:57:52.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:57:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:52.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:53.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:53 compute-2 ceph-mon[5983]: pgmap v741: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct 09 09:57:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3044383655' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3402390666' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:57:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/872108722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.533 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.723 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.724 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5346MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.724 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.725 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.769 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.770 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:57:53 compute-2 nova_compute[163961]: 2025-10-09 09:57:53.782 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:57:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.117 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.120 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.129 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.130 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:57:54 compute-2 nova_compute[163961]: 2025-10-09 09:57:54.130 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:57:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:54.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/872108722' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3718288945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:57:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.131 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:55 compute-2 nova_compute[163961]: 2025-10-09 09:57:55.147 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:57:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:55.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:55 compute-2 ceph-mon[5983]: pgmap v742: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 12 KiB/s wr, 10 op/s
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:57:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:56.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:57:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:57 compute-2 podman[167295]: 2025-10-09 09:57:57.197057461 +0000 UTC m=+0.033303983 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 09 09:57:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:57.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:57 compute-2 ceph-mon[5983]: pgmap v743: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct 09 09:57:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:57:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:57:58.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:57:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:59 compute-2 podman[167313]: 2025-10-09 09:57:59.202194514 +0000 UTC m=+0.034893288 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 09:57:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:57:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:57:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:57:59.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:57:59 compute-2 ceph-mon[5983]: pgmap v744: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:57:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:57:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:57:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:57:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:57:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:57:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:00.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:01.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:01 compute-2 ceph-mon[5983]: pgmap v745: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:02.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:03.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:03 compute-2 ceph-mon[5983]: pgmap v746: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 09 09:58:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:04.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:05.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:05 compute-2 ceph-mon[5983]: pgmap v747: 337 pgs: 337 active+clean; 109 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.0 MiB/s wr, 106 op/s
Oct 09 09:58:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:06.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:07 compute-2 podman[167340]: 2025-10-09 09:58:07.233365079 +0000 UTC m=+0.064393752 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct 09 09:58:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:07.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:07 compute-2 ceph-mon[5983]: pgmap v748: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct 09 09:58:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:08.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:08 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:08.859 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:58:08 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:08.860 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:58:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:09.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:09 compute-2 ceph-mon[5983]: pgmap v749: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:58:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.276 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:10.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:10 compute-2 sudo[167367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:58:10 compute-2 sudo[167367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:10 compute-2 sudo[167367]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:10 compute-2 sudo[167392]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:58:10 compute-2 sudo[167392]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:10 compute-2 sudo[167392]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:11.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:11 compute-2 ceph-mon[5983]: pgmap v750: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 287 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: pgmap v751: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.4 MiB/s wr, 73 op/s
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:58:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:11 compute-2 sudo[167447]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:11 compute-2 sudo[167447]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:11 compute-2 sudo[167447]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:12.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:58:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2643942600' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:58:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:13.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:13 compute-2 ceph-mon[5983]: pgmap v752: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct 09 09:58:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:14.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:15 compute-2 sudo[167475]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:58:15 compute-2 sudo[167475]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:15 compute-2 sudo[167475]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:15.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:15 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:58:15.861 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:15 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:58:15 compute-2 ceph-mon[5983]: pgmap v753: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 179 KiB/s rd, 121 KiB/s wr, 26 op/s
Oct 09 09:58:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2262883313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:16.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:17 compute-2 podman[167502]: 2025-10-09 09:58:17.21139485 +0000 UTC m=+0.041938204 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:58:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:17.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:18 compute-2 ceph-mon[5983]: pgmap v754: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 09 09:58:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:18.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:58:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:19.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:58:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:20 compute-2 ceph-mon[5983]: pgmap v755: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 17 KiB/s wr, 2 op/s
Oct 09 09:58:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:20.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:21.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:22 compute-2 ceph-mon[5983]: pgmap v756: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.0 MiB/s wr, 33 op/s
Oct 09 09:58:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2004045298' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:58:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3984953572' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:58:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:22.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:23.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:24 compute-2 ceph-mon[5983]: pgmap v757: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:58:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:24.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:25.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:26 compute-2 ceph-mon[5983]: pgmap v758: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:58:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:26.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:27.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:28 compute-2 ceph-mon[5983]: pgmap v759: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:58:28 compute-2 podman[167532]: 2025-10-09 09:58:28.207546146 +0000 UTC m=+0.035966597 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:58:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:28.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:29.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:30 compute-2 ceph-mon[5983]: pgmap v760: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 09:58:30 compute-2 podman[167550]: 2025-10-09 09:58:30.20041986 +0000 UTC m=+0.036907791 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct 09 09:58:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:30.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:31.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:32 compute-2 sudo[167569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:32 compute-2 sudo[167569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:32 compute-2 sudo[167569]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:32 compute-2 ceph-mon[5983]: pgmap v761: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:58:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:32.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:33.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:34 compute-2 ceph-mon[5983]: pgmap v762: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:34.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:35.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:36 compute-2 ceph-mon[5983]: pgmap v763: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 09:58:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:37.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:38 compute-2 ceph-mon[5983]: pgmap v764: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct 09 09:58:38 compute-2 podman[167600]: 2025-10-09 09:58:38.223743301 +0000 UTC m=+0.059226552 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:58:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:39.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:40 compute-2 ceph-mon[5983]: pgmap v765: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:40.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:41.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:42 compute-2 ceph-mon[5983]: pgmap v766: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:42.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:43.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:44 compute-2 ceph-mon[5983]: pgmap v767: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:44 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2037300683' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:44.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:46 compute-2 ceph-mon[5983]: pgmap v768: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 221 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:58:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:46.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1068777579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:48 compute-2 ceph-mon[5983]: pgmap v769: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 259 KiB/s rd, 2.1 MiB/s wr, 117 op/s
Oct 09 09:58:48 compute-2 podman[167633]: 2025-10-09 09:58:48.209407844 +0000 UTC m=+0.045203980 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct 09 09:58:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:48.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:49.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:50 compute-2 ceph-mon[5983]: pgmap v770: 337 pgs: 337 active+clean; 48 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:58:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:50.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:51 compute-2 nova_compute[163961]: 2025-10-09 09:58:51.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:51.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:52 compute-2 sudo[167654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:58:52 compute-2 sudo[167654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:58:52 compute-2 sudo[167654]: pam_unix(sudo:session): session closed for user root
Oct 09 09:58:52 compute-2 nova_compute[163961]: 2025-10-09 09:58:52.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:52 compute-2 nova_compute[163961]: 2025-10-09 09:58:52.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:52 compute-2 ceph-mon[5983]: pgmap v771: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct 09 09:58:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/78441123' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4092825429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:52.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:58:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:53.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:58:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3219886149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.527 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.715 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.716 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5340MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.716 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.717 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.756 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:58:53 compute-2 nova_compute[163961]: 2025-10-09 09:58:53.769 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:58:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:58:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/912181250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.116 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.120 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.130 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.131 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:58:54 compute-2 nova_compute[163961]: 2025-10-09 09:58:54.131 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:58:54 compute-2 ceph-mon[5983]: pgmap v772: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3219886149' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/912181250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:54.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.131 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.132 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.132 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-2 nova_compute[163961]: 2025-10-09 09:58:55.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/474588185' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:55.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:56 compute-2 ceph-mon[5983]: pgmap v773: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 56 op/s
Oct 09 09:58:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3921144099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:58:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:56.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:57 compute-2 nova_compute[163961]: 2025-10-09 09:58:57.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:58:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:57.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:58:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:58:58 compute-2 ceph-mon[5983]: pgmap v774: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 15 KiB/s wr, 57 op/s
Oct 09 09:58:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:58:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:58:58.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:59 compute-2 podman[167730]: 2025-10-09 09:58:59.202460159 +0000 UTC m=+0.034422937 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct 09 09:58:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:58:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:58:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:58:59.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:58:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:58:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:58:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:58:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:58:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:00 compute-2 ceph-mon[5983]: pgmap v775: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 B/s wr, 1 op/s
Oct 09 09:59:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:00.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:00 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:01 compute-2 podman[167748]: 2025-10-09 09:59:01.226920302 +0000 UTC m=+0.048780361 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 09 09:59:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:01 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:02 compute-2 ceph-mon[5983]: pgmap v776: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 1 op/s
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.314 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.314 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.324 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.374 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.375 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.415 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.416 2 INFO nova.compute.claims [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Claim successful on node compute-2.ctlplane.example.com
Oct 09 09:59:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:02.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.485 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.836 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.839 2 DEBUG nova.compute.provider_tree [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.854 2 DEBUG nova.scheduler.client.report [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.870 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.496s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.871 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct 09 09:59:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:02 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.921 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.921 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.937 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct 09 09:59:02 compute-2 nova_compute[163961]: 2025-10-09 09:59:02.952 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.064 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.065 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.065 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating image(s)
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.085 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.105 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.124 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.126 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.127 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/552648784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:03 compute-2 nova_compute[163961]: 2025-10-09 09:59:03.417 2 DEBUG nova.virt.libvirt.imagebackend [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image locations are: [{'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://286f8bf0-da72-5823-9a4e-ac4457d9e609/images/9546778e-959c-466e-9bef-81ace5bd1cc5/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct 09 09:59:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:03.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:03 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.014 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.059 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.060 2 DEBUG nova.virt.images [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] 9546778e-959c-466e-9bef-81ace5bd1cc5 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.061 2 DEBUG nova.privsep.utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.061 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.092 2 WARNING oslo_policy.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.093 2 WARNING oslo_policy.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.095 2 DEBUG nova.policy [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.118 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.part /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.122 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.166 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb.converted --force-share --output=json" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.167 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "5c8d02c7691a8289e33d8b283b22550ff081dadb" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.184 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.186 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7c7e2ca-e694-465f-941e-15513c7e91ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:04 compute-2 ceph-mon[5983]: pgmap v777: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.358 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/5c8d02c7691a8289e33d8b283b22550ff081dadb c7c7e2ca-e694-465f-941e-15513c7e91ab_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.172s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.402 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] resizing rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct 09 09:59:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:04.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.457 2 DEBUG nova.objects.instance [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'migration_context' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.472 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.472 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Ensure instance console log exists: /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:04 compute-2 nova_compute[163961]: 2025-10-09 09:59:04.473 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:04 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:05.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:05 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:06 compute-2 nova_compute[163961]: 2025-10-09 09:59:06.292 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully created port: 55484b13-541c-4895-beab-bdcdaa30f4fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:59:06 compute-2 ceph-mon[5983]: pgmap v778: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 09:59:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:06.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:06 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:07.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.804 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully updated port: 55484b13-541c-4895-beab-bdcdaa30f4fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.816 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.817 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.817 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.879 2 DEBUG nova.compute.manager [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.880 2 DEBUG nova.compute.manager [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.880 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:07 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:07 compute-2 nova_compute[163961]: 2025-10-09 09:59:07.925 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:08 compute-2 ceph-mon[5983]: pgmap v779: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct 09 09:59:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:08.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.470 2 DEBUG nova.network.neutron [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.483 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance network_info: |[{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.484 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.486 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start _get_guest_xml network_info=[{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_type': 'disk', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'disk_bus': 'virtio', 'size': 0, 'encrypted': False, 'encryption_secret_uuid': None, 'boot_index': 0, 'image_id': '9546778e-959c-466e-9bef-81ace5bd1cc5'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.489 2 WARNING nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.492 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.492 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.498 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Searching host: 'compute-2.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.498 2 DEBUG nova.virt.libvirt.host [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-09T09:54:30Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='6c4b2ce4-c9d2-467c-bac4-dc6a1184a891',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-09T09:54:31Z,direct_url=<?>,disk_format='qcow2',id=9546778e-959c-466e-9bef-81ace5bd1cc5,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='a53d5690b6a54109990182326650a2b8',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-09T09:54:34Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.499 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.500 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.501 2 DEBUG nova.virt.hardware [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.503 2 DEBUG nova.privsep.utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.504 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:59:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2385607545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.845 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.862 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:08 compute-2 nova_compute[163961]: 2025-10-09 09:59:08.865 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:08 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.174 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.175 2 DEBUG nova.network.neutron [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.189 2 DEBUG oslo_concurrency.lockutils [req-073050e3-7235-4ad8-a159-a69865fd59b9 req-70598753-1ec9-463a-808a-84a0e18a1949 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Oct 09 09:59:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2240393634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.213 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.214 2 DEBUG nova.virt.libvirt.vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:02Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.215 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.216 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.217 2 DEBUG nova.objects.instance [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_devices' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:09 compute-2 podman[168033]: 2025-10-09 09:59:09.228110012 +0000 UTC m=+0.063713670 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.229 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] End _get_guest_xml xml=<domain type="kvm">
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <name>instance-00000006</name>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <memory>131072</memory>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <vcpu>1</vcpu>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <metadata>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:creationTime>2025-10-09 09:59:08</nova:creationTime>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:flavor name="m1.nano">
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:memory>128</nova:memory>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:disk>1</nova:disk>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:swap>0</nova:swap>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:vcpus>1</nova:vcpus>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </nova:flavor>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:owner>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </nova:owner>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <nova:ports>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct 09 09:59:09 compute-2 nova_compute[163961]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         </nova:port>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </nova:ports>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </nova:instance>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </metadata>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <sysinfo type="smbios">
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <system>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="manufacturer">RDO</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="product">OpenStack Compute</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="serial">c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="uuid">c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <entry name="family">Virtual Machine</entry>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </system>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </sysinfo>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <os>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <type arch="x86_64" machine="q35">hvm</type>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <boot dev="hd"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <smbios mode="sysinfo"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </os>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <features>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <acpi/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <apic/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <vmcoreinfo/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </features>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <clock offset="utc">
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <timer name="pit" tickpolicy="delay"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <timer name="rtc" tickpolicy="catchup"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <timer name="hpet" present="no"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </clock>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <cpu mode="host-model" match="exact">
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <topology sockets="1" cores="1" threads="1"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </cpu>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   <devices>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <disk type="network" device="disk">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <driver type="raw" cache="none"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <source protocol="rbd" name="vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk">
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </source>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <auth username="openstack">
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </auth>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <target dev="vda" bus="virtio"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <disk type="network" device="cdrom">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <driver type="raw" cache="none"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <source protocol="rbd" name="vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config">
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.100" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.102" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <host name="192.168.122.101" port="6789"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </source>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <auth username="openstack">
Oct 09 09:59:09 compute-2 nova_compute[163961]:         <secret type="ceph" uuid="286f8bf0-da72-5823-9a4e-ac4457d9e609"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       </auth>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <target dev="sda" bus="sata"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </disk>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <interface type="ethernet">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <mac address="fa:16:3e:d9:1b:2f"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <model type="virtio"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <mtu size="1442"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <target dev="tap55484b13-54"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </interface>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <serial type="pty">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <log file="/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log" append="off"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </serial>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <video>
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <model type="virtio"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </video>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <input type="tablet" bus="usb"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <rng model="virtio">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <backend model="random">/dev/urandom</backend>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </rng>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="pci" model="pcie-root-port"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <controller type="usb" index="0"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     <memballoon model="virtio">
Oct 09 09:59:09 compute-2 nova_compute[163961]:       <stats period="10"/>
Oct 09 09:59:09 compute-2 nova_compute[163961]:     </memballoon>
Oct 09 09:59:09 compute-2 nova_compute[163961]:   </devices>
Oct 09 09:59:09 compute-2 nova_compute[163961]: </domain>
Oct 09 09:59:09 compute-2 nova_compute[163961]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Preparing to wait for external event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.230 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG nova.virt.libvirt.vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-09T09:59:02Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.231 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.232 2 DEBUG nova.network.os_vif_util [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.232 2 DEBUG os_vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.259 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.259 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.272 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.272 2 INFO oslo.privsep.daemon [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpy23ocg_f/privsep.sock']
Oct 09 09:59:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2385607545' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:09 compute-2 ceph-mon[5983]: pgmap v780: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Oct 09 09:59:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2240393634' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:09.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.820 2 INFO oslo.privsep.daemon [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.734 698 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.738 698 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.739 698 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Oct 09 09:59:09 compute-2 nova_compute[163961]: 2025-10-09 09:59:09.740 698 INFO oslo.privsep.daemon [-] privsep daemon running as pid 698
Oct 09 09:59:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:09 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.074 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55484b13-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.075 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55484b13-54, col_values=(('external_ids', {'iface-id': '55484b13-541c-4895-beab-bdcdaa30f4fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d9:1b:2f', 'vm-uuid': 'c7c7e2ca-e694-465f-941e-15513c7e91ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.0772] manager: (tap55484b13-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.082 2 INFO os_vif [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54')
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.116 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:1b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.117 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Using config drive
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.133 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.277 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.318 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Creating config drive at /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.322 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63p3tami execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.443 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp63p3tami" returned: 0 in 0.121s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:10.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.465 2 DEBUG nova.storage.rbd_utils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] rbd image c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.468 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.551 2 DEBUG oslo_concurrency.processutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.552 2 INFO nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deleting local config drive /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/disk.config because it was imported into RBD.
Oct 09 09:59:10 compute-2 systemd[1]: Starting libvirt secret daemon...
Oct 09 09:59:10 compute-2 systemd[1]: Started libvirt secret daemon.
Oct 09 09:59:10 compute-2 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct 09 09:59:10 compute-2 kernel: tap55484b13-54: entered promiscuous mode
Oct 09 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6328] manager: (tap55484b13-54): new Tun device (/org/freedesktop/NetworkManager/Devices/26)
Oct 09 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00027|binding|INFO|Claiming lport 55484b13-541c-4895-beab-bdcdaa30f4fe for this chassis.
Oct 09 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00028|binding|INFO|55484b13-541c-4895-beab-bdcdaa30f4fe: Claiming fa:16:3e:d9:1b:2f 10.100.0.6
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.645 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1b:2f 10.100.0.6'], port_security=['fa:16:3e:d9:1b:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '72489230-c514-4cf9-bf1c-35e063204738', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=55484b13-541c-4895-beab-bdcdaa30f4fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.647 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 55484b13-541c-4895-beab-bdcdaa30f4fe in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 bound to our chassis
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.648 71793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ab21f371-26e2-4c4f-bba0-3c44fb308723
Oct 09 09:59:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:10.649 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp_7t9jm42/privsep.sock']
Oct 09 09:59:10 compute-2 systemd-udevd[168161]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6757] device (tap55484b13-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:59:10 compute-2 NetworkManager[984]: <info>  [1760003950.6765] device (tap55484b13-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:59:10 compute-2 systemd-machined[121527]: New machine qemu-1-instance-00000006.
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 systemd[1]: Started Virtual Machine qemu-1-instance-00000006.
Oct 09 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00029|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe ovn-installed in OVS
Oct 09 09:59:10 compute-2 ovn_controller[62794]: 2025-10-09T09:59:10Z|00030|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe up in Southbound
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:10 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.992 2 DEBUG nova.compute.manager [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.992 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG oslo_concurrency.lockutils [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:10 compute-2 nova_compute[163961]: 2025-10-09 09:59:10.993 2 DEBUG nova.compute.manager [req-f003dd25-afa4-468c-9925-79b2a2aa4168 req-f0d96e13-5827-4f62-8dc8-ec3a91fdf968 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Processing event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.125 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.199 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.199 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_7t9jm42/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.113 168221 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.116 168221 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.119 168221 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.119 168221 INFO oslo.privsep.daemon [-] privsep daemon running as pid 168221
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.202 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[a67c1c37-b7e5-4cd6-80fd-712a2b9480b6]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.442 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.442 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4415255, c7c7e2ca-e694-465f-941e-15513c7e91ab => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.443 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Started (Lifecycle Event)
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.445 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.452 2 INFO nova.virt.libvirt.driver [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance spawned successfully.
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.452 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.461 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.462 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.468 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.468 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.469 2 DEBUG nova.virt.libvirt.driver [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.475 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.476 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4447737, c7c7e2ca-e694-465f-941e-15513c7e91ab => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.476 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Paused (Lifecycle Event)
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.501 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.503 2 DEBUG nova.virt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Emitting event <LifecycleEvent: 1760003951.4453082, c7c7e2ca-e694-465f-941e-15513c7e91ab => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.503 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Resumed (Lifecycle Event)
Oct 09 09:59:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:11.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.523 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.525 2 DEBUG nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.528 2 INFO nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 8.46 seconds to spawn the instance on the hypervisor.
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.529 2 DEBUG nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.547 2 INFO nova.compute.manager [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] During sync_power_state the instance has a pending task (spawning). Skip.
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.577 2 INFO nova.compute.manager [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 9.23 seconds to build instance.
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.587 2 DEBUG oslo_concurrency.lockutils [None req-6448524f-cc0d-4811-a3e1-294c1608ff7e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.713 168221 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.714 168221 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:11 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:11.714 168221 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:11 compute-2 nova_compute[163961]: 2025-10-09 09:59:11.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 09:59:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:59:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 09:59:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:59:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:11 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:12 compute-2 ceph-mon[5983]: pgmap v781: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 40 op/s
Oct 09 09:59:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 09:59:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/282215734' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 09:59:12 compute-2 sudo[168227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:12 compute-2 sudo[168227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:12 compute-2 sudo[168227]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.307 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[ffa69bee-bb96-41d9-8ad7-fb2ae03345ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.308 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapab21f371-21 in ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.310 168221 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapab21f371-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.310 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[7a590b71-d8b8-4040-bb56-1ae69c852888]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.314 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[f2eda36a-a2a1-457a-b8b7-11f1dac546a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.332 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[177f40f9-ac91-41fe-bbd9-ac1915933ab3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.346 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[699e104a-764f-49f5-b4bd-859a4c6102ee]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.349 71793 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpelyc0ali/privsep.sock']
Oct 09 09:59:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:12.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:12 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.955 71793 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.956 71793 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpelyc0ali/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.864 168262 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.867 168262 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.869 168262 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.869 168262 INFO oslo.privsep.daemon [-] privsep daemon running as pid 168262
Oct 09 09:59:12 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:12.959 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[770165c7-d565-4eb6-9191-42f5e90ed1d8]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.072 2 DEBUG nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.072 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG oslo_concurrency.lockutils [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.073 2 DEBUG nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:59:13 compute-2 nova_compute[163961]: 2025-10-09 09:59:13.074 2 WARNING nova.compute.manager [req-1aeda0d1-4dd7-47d4-a0e3-624c258584bf req-844ec2ff-697c-49d4-b60c-5569172dc23a b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state active and task_state None.
Oct 09 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.442 168262 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:13.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:13 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.965 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[4bac90c4-67d8-4889-93e4-8fce1d92d142]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:13 compute-2 NetworkManager[984]: <info>  [1760003953.9780] manager: (tapab21f371-20): new Veth device (/org/freedesktop/NetworkManager/Devices/27)
Oct 09 09:59:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:13.977 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[0afc0e28-461e-47c7-8f12-bc75926b94d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:13 compute-2 systemd-udevd[168273]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.012 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[c1062481-c188-437f-ad22-2fc7fdc3a860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.015 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[98658a6f-da66-46bd-a7dd-c090b6f29ec3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 NetworkManager[984]: <info>  [1760003954.0310] device (tapab21f371-20): carrier: link connected
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.034 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[9084ce1e-5065-4285-8968-753cc8e08eea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.055 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[4de7eaf4-9a61-4865-8f1e-2e6a24a45d20]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab21f371-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:89:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163846, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 168284, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.064 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[167168ba-ff62-4fc6-9bd4-624d262d8cb3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe77:895b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 163846, 'tstamp': 163846}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 168286, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.075 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[697fa369-ad2d-43bf-9221-4614a5bda2b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapab21f371-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:77:89:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 20], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163846, 'reachable_time': 26932, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 168287, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ceph-mon[5983]: pgmap v782: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.099 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[e1de30d1-a174-4374-bcf4-b28fdd37fe3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.143 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[e7237359-5a77-46c3-8ed2-ebc8e16acb85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.145 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21f371-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.146 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.146 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapab21f371-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:14 compute-2 NetworkManager[984]: <info>  [1760003954.1497] manager: (tapab21f371-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct 09 09:59:14 compute-2 kernel: tapab21f371-20: entered promiscuous mode
Oct 09 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.157 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapab21f371-20, col_values=(('external_ids', {'iface-id': '188102c6-f5ba-4733-92be-2659db7ae55a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:14 compute-2 ovn_controller[62794]: 2025-10-09T09:59:14Z|00031|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.162 71793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.163 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[045c9300-7082-436d-b82f-4abc4764b664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.164 71793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: global
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     log         /dev/log local0 debug
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     log-tag     haproxy-metadata-proxy-ab21f371-26e2-4c4f-bba0-3c44fb308723
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     user        root
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     group       root
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     maxconn     1024
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     pidfile     /var/lib/neutron/external/pids/ab21f371-26e2-4c4f-bba0-3c44fb308723.pid.haproxy
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     daemon
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: defaults
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     log global
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     mode http
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     option httplog
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     option dontlognull
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     option http-server-close
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     option forwardfor
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     retries                 3
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     timeout http-request    30s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     timeout connect         30s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     timeout client          32s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     timeout server          32s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     timeout http-keep-alive 30s
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: listen listener
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     bind 169.254.169.254:80
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:     http-request add-header X-OVN-Network-ID ab21f371-26e2-4c4f-bba0-3c44fb308723
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.165 71793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'env', 'PROCESS_TAG=haproxy-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ab21f371-26e2-4c4f-bba0-3c44fb308723.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:59:14 compute-2 nova_compute[163961]: 2025-10-09 09:59:14.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:14.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.470290658 +0000 UTC m=+0.037607971 container create 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001)
Oct 09 09:59:14 compute-2 systemd[1]: Started libpod-conmon-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope.
Oct 09 09:59:14 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:59:14 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3b78e34eb4bd48f615c51f92b1c60c1faa9ea89e7ed53520625d65534f9f4de/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.524124013 +0000 UTC m=+0.091441336 container init 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct 09 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.528622192 +0000 UTC m=+0.095939505 container start 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 09:59:14 compute-2 podman[168317]: 2025-10-09 09:59:14.450676386 +0000 UTC m=+0.017993719 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:59:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:14 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : New worker (168335) forked
Oct 09 09:59:14 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : Loading success.
Oct 09 09:59:14 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:14.567 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 09:59:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:14 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:15 compute-2 ovn_controller[62794]: 2025-10-09T09:59:15Z|00032|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0643] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/29)
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0645] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0653] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/30)
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0655] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0660] manager: (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0662] manager: (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0664] device (patch-br-int-to-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:59:15 compute-2 NetworkManager[984]: <info>  [1760003955.0666] device (patch-provnet-ceb5df48-9471-46cc-b494-923d3260d7ae-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:15 compute-2 ovn_controller[62794]: 2025-10-09T09:59:15Z|00033|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:15 compute-2 sudo[168340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:59:15 compute-2 sudo[168340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:15 compute-2 sudo[168340]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:15 compute-2 sudo[168365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 09:59:15 compute-2 sudo[168365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.252 2 DEBUG nova.compute.manager [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.253 2 DEBUG nova.compute.manager [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.253 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.254 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:15 compute-2 nova_compute[163961]: 2025-10-09 09:59:15.254 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:59:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:15.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:15 compute-2 podman[168446]: 2025-10-09 09:59:15.651500603 +0000 UTC m=+0.039444863 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 09:59:15 compute-2 podman[168446]: 2025-10-09 09:59:15.736136741 +0000 UTC m=+0.124080991 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Oct 09 09:59:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:15 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:15 compute-2 podman[168526]: 2025-10-09 09:59:15.994984273 +0000 UTC m=+0.035101717 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:59:16 compute-2 podman[168526]: 2025-10-09 09:59:16.000520428 +0000 UTC m=+0.040637893 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 09:59:16 compute-2 ceph-mon[5983]: pgmap v783: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct 09 09:59:16 compute-2 podman[168628]: 2025-10-09 09:59:16.349270345 +0000 UTC m=+0.045667504 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:59:16 compute-2 podman[168628]: 2025-10-09 09:59:16.360056046 +0000 UTC m=+0.056453185 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 09:59:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:16.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:16 compute-2 podman[168680]: 2025-10-09 09:59:16.514009975 +0000 UTC m=+0.045222714 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, distribution-scope=public, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, name=keepalived, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, version=2.2.4, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-type=git, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph.)
Oct 09 09:59:16 compute-2 podman[168680]: 2025-10-09 09:59:16.520911905 +0000 UTC m=+0.052124644 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1793, vcs-type=git, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived, io.buildah.version=1.28.2, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>)
Oct 09 09:59:16 compute-2 podman[168722]: 2025-10-09 09:59:16.648876947 +0000 UTC m=+0.046081393 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 09:59:16 compute-2 podman[168722]: 2025-10-09 09:59:16.663103513 +0000 UTC m=+0.060307949 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Oct 09 09:59:16 compute-2 nova_compute[163961]: 2025-10-09 09:59:16.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:16 compute-2 sudo[168365]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:16 compute-2 sudo[168779]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 09:59:16 compute-2 sudo[168779]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:16 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:16 compute-2 sudo[168779]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:16 compute-2 sudo[168804]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 09:59:16 compute-2 sudo[168804]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.173 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.174 2 DEBUG nova.network.neutron [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:17 compute-2 nova_compute[163961]: 2025-10-09 09:59:17.188 2 DEBUG oslo_concurrency.lockutils [req-ab5d927e-6585-470e-8929-1ee24fefd075 req-072b7398-01ee-4561-bcfd-c64daa9a9d0e b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:17 compute-2 sudo[168804]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:17.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: pgmap v784: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 09:59:17 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 09:59:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:17 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:18.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:18 compute-2 ceph-mon[5983]: pgmap v785: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:18 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:19 compute-2 podman[168860]: 2025-10-09 09:59:19.220362893 +0000 UTC m=+0.048738212 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct 09 09:59:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:19.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:19 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:20 compute-2 nova_compute[163961]: 2025-10-09 09:59:20.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:20.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:20 compute-2 ceph-mon[5983]: pgmap v786: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 15 KiB/s wr, 86 op/s
Oct 09 09:59:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:20 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:21 compute-2 sudo[168881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 09:59:21 compute-2 sudo[168881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:21 compute-2 sudo[168881]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:21.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:21 compute-2 nova_compute[163961]: 2025-10-09 09:59:21.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:21 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 09:59:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:22.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:22 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:23 compute-2 ovn_controller[62794]: 2025-10-09T09:59:23Z|00004|pinctrl(ovn_pinctrl1)|INFO|DHCPOFFER fa:16:3e:d9:1b:2f 10.100.0.6
Oct 09 09:59:23 compute-2 ovn_controller[62794]: 2025-10-09T09:59:23Z|00005|pinctrl(ovn_pinctrl1)|INFO|DHCPACK fa:16:3e:d9:1b:2f 10.100.0.6
Oct 09 09:59:23 compute-2 ceph-mon[5983]: pgmap v787: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:23.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:23 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:23.571 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:23 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:24.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:24 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:25 compute-2 nova_compute[163961]: 2025-10-09 09:59:25.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:25 compute-2 ceph-mon[5983]: pgmap v788: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:25.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:25 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:26.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:26 compute-2 nova_compute[163961]: 2025-10-09 09:59:26.772 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:26 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:27 compute-2 ceph-mon[5983]: pgmap v789: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 80 op/s
Oct 09 09:59:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:27.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:27 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:28.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:28 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:29 compute-2 ceph-mon[5983]: pgmap v790: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 316 KiB/s rd, 2.5 MiB/s wr, 71 op/s
Oct 09 09:59:29 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.531 2 INFO nova.compute.manager [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Get console output
Oct 09 09:59:29 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.535 2 INFO oslo.privsep.daemon [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1s6af3x0/privsep.sock']
Oct 09 09:59:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:29.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:29 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.073 2 INFO oslo.privsep.daemon [None req-c187f1ed-fe6f-4361-975d-077c94a33df4 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Spawned new privsep daemon via rootwrap
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.989 764 INFO oslo.privsep.daemon [-] privsep daemon starting
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.993 764 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.995 764 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:29.995 764 INFO oslo.privsep.daemon [-] privsep daemon running as pid 764
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:30 compute-2 nova_compute[163961]: 2025-10-09 09:59:30.151 764 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Oct 09 09:59:30 compute-2 podman[168922]: 2025-10-09 09:59:30.201679357 +0000 UTC m=+0.034994434 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:59:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:30.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:30 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:31 compute-2 ceph-mon[5983]: pgmap v791: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct 09 09:59:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:31.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:31 compute-2 nova_compute[163961]: 2025-10-09 09:59:31.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:31 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:32 compute-2 podman[168940]: 2025-10-09 09:59:32.211438819 +0000 UTC m=+0.043777802 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct 09 09:59:32 compute-2 sudo[168949]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:32 compute-2 sudo[168949]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:32 compute-2 sudo[168949]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:32.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:32 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:33 compute-2 ceph-mon[5983]: pgmap v792: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:33.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:33 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.142 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.143 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.143 2 DEBUG nova.objects.instance [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.353 2 DEBUG nova.objects.instance [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'pci_requests' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.363 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct 09 09:59:34 compute-2 nova_compute[163961]: 2025-10-09 09:59:34.472 2 DEBUG nova.policy [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2351e05157514d1995a1ea4151d12fee', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct 09 09:59:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:34.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:34 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:35 compute-2 nova_compute[163961]: 2025-10-09 09:59:35.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:35 compute-2 ceph-mon[5983]: pgmap v793: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:35 compute-2 nova_compute[163961]: 2025-10-09 09:59:35.293 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully created port: 8bfb9190-a455-483f-a18f-f65db3220f30 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct 09 09:59:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:35.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:35 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.247 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Successfully updated port: 8bfb9190-a455-483f-a18f-f65db3220f30 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.258 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.258 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.259 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG nova.compute.manager [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG nova.compute.manager [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.321 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:36.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:36 compute-2 nova_compute[163961]: 2025-10-09 09:59:36.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:36 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:37 compute-2 ceph-mon[5983]: pgmap v794: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:37.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.672 2 DEBUG nova.network.neutron [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.685 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.virt.libvirt.vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.688 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.689 2 DEBUG os_vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.690 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bfb9190-a4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.693 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bfb9190-a4, col_values=(('external_ids', {'iface-id': '8bfb9190-a455-483f-a18f-f65db3220f30', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:79:80:50', 'vm-uuid': 'c7c7e2ca-e694-465f-941e-15513c7e91ab'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.6952] manager: (tap8bfb9190-a4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.700 2 INFO os_vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4')
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.virt.libvirt.vif [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.701 2 DEBUG nova.network.os_vif_util [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.703 2 DEBUG nova.virt.libvirt.guest [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] attach device xml: <interface type="ethernet">
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <mac address="fa:16:3e:79:80:50"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <model type="virtio"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <mtu size="1442"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <target dev="tap8bfb9190-a4"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]: </interface>
Oct 09 09:59:37 compute-2 nova_compute[163961]:  attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339
Oct 09 09:59:37 compute-2 kernel: tap8bfb9190-a4: entered promiscuous mode
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7108] manager: (tap8bfb9190-a4): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00034|binding|INFO|Claiming lport 8bfb9190-a455-483f-a18f-f65db3220f30 for this chassis.
Oct 09 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00035|binding|INFO|8bfb9190-a455-483f-a18f-f65db3220f30: Claiming fa:16:3e:79:80:50 10.100.0.28
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.723 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:80:50 10.100.0.28'], port_security=['fa:16:3e:79:80:50 10.100.0.28'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '938aac20-7e1a-43e3-b950-0829bdd160e1', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=8bfb9190-a455-483f-a18f-f65db3220f30) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.724 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 8bfb9190-a455-483f-a18f-f65db3220f30 in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 bound to our chassis
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.725 71793 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.733 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[bc2db4e1-6f9b-4f11-8486-ebdfd2443a6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.733 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4f792301-c1 in ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.735 168221 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4f792301-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.735 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[9aff95a9-91df-431b-9001-13a50cafbaae]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.737 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[fcb4ca50-d9bc-46ee-89bb-7766d76dc340]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 systemd-udevd[168996]: Network interface NamePolicy= disabled on kernel command line.
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.751 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[bf266666-f2ef-4e1b-861b-59795b76da6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7604] device (tap8bfb9190-a4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.7609] device (tap8bfb9190-a4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00036|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 ovn-installed in OVS
Oct 09 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00037|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 up in Southbound
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.766 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:d9:1b:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.767 2 DEBUG nova.virt.libvirt.driver [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] No VIF found with MAC fa:16:3e:79:80:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.783 2 DEBUG nova.virt.libvirt.guest [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:flavor name="m1.nano">
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:memory>128</nova:memory>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:disk>1</nova:disk>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:swap>0</nova:swap>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:vcpus>1</nova:vcpus>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   </nova:flavor>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:owner>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   </nova:owner>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   <nova:ports>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct 09 09:59:37 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct 09 09:59:37 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 09 09:59:37 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 09:59:37 compute-2 nova_compute[163961]:   </nova:ports>
Oct 09 09:59:37 compute-2 nova_compute[163961]: </nova:instance>
Oct 09 09:59:37 compute-2 nova_compute[163961]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.785 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[43a366fa-166d-4324-a0e2-8376c621664f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.800 2 DEBUG oslo_concurrency.lockutils [None req-d772e914-4df0-4b5c-b476-908f7bc71825 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 3.657s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.804 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[18fc7233-497e-4326-9646-55611a13387a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.809 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[c2249c19-9598-4782-ac9c-a56990517e2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.8096] manager: (tap4f792301-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.829 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[36fd3e89-5a40-461f-8be5-7e6895ef8126]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.832 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[a8d36180-2477-461f-a49d-a46ba4d63346]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.8477] device (tap4f792301-c0): carrier: link connected
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.852 168262 DEBUG oslo.privsep.daemon [-] privsep: reply[d8d2ab23-43cf-4efa-8502-c4d51f2b35ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.864 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[dc663f1a-dafc-46c5-9114-0400eb80e92d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166228, 'reachable_time': 37744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169014, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.875 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[45abeea4-b1e6-450e-a709-6bbe690f17e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe27:7e66'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 166228, 'tstamp': 166228}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 169015, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.886 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[2b373977-deaa-4ebc-9f2f-9dc42827577b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4f792301-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 4], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 4], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:27:7e:66'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166228, 'reachable_time': 37744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 169016, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:37 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.905 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[81b1ab65-f9aa-4d26-aeac-5f050896cc7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.942 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[f252b447-00fc-43e6-b833-da97e0496db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.943 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f792301-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 NetworkManager[984]: <info>  [1760003977.9456] manager: (tap4f792301-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct 09 09:59:37 compute-2 kernel: tap4f792301-c0: entered promiscuous mode
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.949 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4f792301-c0, col_values=(('external_ids', {'iface-id': '704a96af-9e0f-4b61-9b53-029cbdc713e8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 ovn_controller[62794]: 2025-10-09T09:59:37Z|00038|binding|INFO|Releasing lport 704a96af-9e0f-4b61-9b53-029cbdc713e8 from this chassis (sb_readonly=0)
Oct 09 09:59:37 compute-2 nova_compute[163961]: 2025-10-09 09:59:37.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.963 71793 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.963 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[28ea801e-1924-4fde-81bb-2b3e444e1629]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.964 71793 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: global
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     log         /dev/log local0 debug
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     log-tag     haproxy-metadata-proxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     user        root
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     group       root
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     maxconn     1024
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     pidfile     /var/lib/neutron/external/pids/4f792301-cf2d-455d-8ad6-8a55cc3146e9.pid.haproxy
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     daemon
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: defaults
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     log global
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     mode http
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     option httplog
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     option dontlognull
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     option http-server-close
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     option forwardfor
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     retries                 3
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     timeout http-request    30s
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     timeout connect         30s
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     timeout client          32s
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     timeout server          32s
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     timeout http-keep-alive 30s
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: listen listener
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     bind 169.254.169.254:80
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     server metadata /var/lib/neutron/metadata_proxy
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:     http-request add-header X-OVN-Network-ID 4f792301-cf2d-455d-8ad6-8a55cc3146e9
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct 09 09:59:37 compute-2 ovn_metadata_agent[71788]: 2025-10-09 09:59:37.964 71793 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'env', 'PROCESS_TAG=haproxy-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4f792301-cf2d-455d-8ad6-8a55cc3146e9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.242699355 +0000 UTC m=+0.033715565 container create 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 09:59:38 compute-2 systemd[1]: Started libpod-conmon-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope.
Oct 09 09:59:38 compute-2 systemd[1]: Started libcrun container.
Oct 09 09:59:38 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3fd04afb0f5989930c0fda502bb4a65862b9bd9c1ecbb0d35d11811aaed28a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct 09 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.306457076 +0000 UTC m=+0.097473306 container init 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.310674036 +0000 UTC m=+0.101690244 container start 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct 09 09:59:38 compute-2 podman[169046]: 2025-10-09 09:59:38.226608103 +0000 UTC m=+0.017624332 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct 09 09:59:38 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : New worker (169064) forked
Oct 09 09:59:38 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : Loading success.
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.404 2 DEBUG nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.404 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG oslo_concurrency.lockutils [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.405 2 DEBUG nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.406 2 WARNING nova.compute.manager [req-60a6915b-d51e-48fa-af31-44318ac7f5c6 req-29609f5e-1586-48da-82b5-2d3e5b8d0ad4 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.
Oct 09 09:59:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:38.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.760 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.761 2 DEBUG nova.network.neutron [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:38 compute-2 nova_compute[163961]: 2025-10-09 09:59:38.774 2 DEBUG oslo_concurrency.lockutils [req-998b156a-9be4-47e3-965c-0a26e1d151b5 req-15cd7b4c-6e7c-49e4-a002-57a07bbdc007 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:38 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:39 compute-2 ceph-mon[5983]: pgmap v795: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 09:59:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:39.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:39 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:40 compute-2 podman[169070]: 2025-10-09 09:59:40.226429203 +0000 UTC m=+0.060045345 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 09:59:40 compute-2 ovn_controller[62794]: 2025-10-09T09:59:40Z|00006|pinctrl(ovn_pinctrl1)|INFO|DHCPOFFER fa:16:3e:79:80:50 10.100.0.28
Oct 09 09:59:40 compute-2 ovn_controller[62794]: 2025-10-09T09:59:40Z|00007|pinctrl(ovn_pinctrl1)|INFO|DHCPACK fa:16:3e:79:80:50 10.100.0.28
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.458 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 DEBUG oslo_concurrency.lockutils [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 DEBUG nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 09:59:40 compute-2 nova_compute[163961]: 2025-10-09 09:59:40.459 2 WARNING nova.compute.manager [req-40830f4f-6427-4d7d-a92d-7688d995fd0c req-fbd6ed25-ce2b-45f9-b631-fd9398f8f48d b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.
Oct 09 09:59:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:40.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:40 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:41 compute-2 ceph-mon[5983]: pgmap v796: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 09:59:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:41.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:41 compute-2 nova_compute[163961]: 2025-10-09 09:59:41.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:41 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2641533928' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:42.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:42 compute-2 nova_compute[163961]: 2025-10-09 09:59:42.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:42 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:43 compute-2 ceph-mon[5983]: pgmap v797: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 14 KiB/s wr, 2 op/s
Oct 09 09:59:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:43.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:43 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000009s ======
Oct 09 09:59:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000009s
Oct 09 09:59:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:44 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:45 compute-2 ceph-mon[5983]: pgmap v798: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct 09 09:59:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:45.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:45 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.380286) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 43
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986380313, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1563, "num_deletes": 250, "total_data_size": 3946176, "memory_usage": 3994632, "flush_reason": "Manual Compaction"}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #44: started
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986384944, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 44, "file_size": 1603308, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23183, "largest_seqno": 24741, "table_properties": {"data_size": 1598171, "index_size": 2405, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13335, "raw_average_key_size": 20, "raw_value_size": 1586997, "raw_average_value_size": 2456, "num_data_blocks": 104, "num_entries": 646, "num_filter_entries": 646, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003856, "oldest_key_time": 1760003856, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 4677 microseconds, and 3270 cpu microseconds.
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384968) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #44: 1603308 bytes OK
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.384981) [db/memtable_list.cc:519] [default] Level-0 commit table #44 started
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385542) [db/memtable_list.cc:722] [default] Level-0 commit table #44: memtable #1 done
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385554) EVENT_LOG_v1 {"time_micros": 1760003986385551, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.385564) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 3938945, prev total WAL file size 3938945, number of live WAL files 2.
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000040.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386258) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [44(1565KB)], [42(13MB)]
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986386280, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [44], "files_L6": [42], "score": -1, "input_data_size": 16204156, "oldest_snapshot_seqno": -1}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #45: 5589 keys, 13065364 bytes, temperature: kUnknown
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986423770, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 45, "file_size": 13065364, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13028526, "index_size": 21752, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14021, "raw_key_size": 140469, "raw_average_key_size": 25, "raw_value_size": 12927661, "raw_average_value_size": 2313, "num_data_blocks": 891, "num_entries": 5589, "num_filter_entries": 5589, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760003986, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 45, "seqno_to_time_mapping": "N/A"}}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.423945) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 13065364 bytes
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.424325) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 431.9 rd, 348.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 13.9 +0.0 blob) out(12.5 +0.0 blob), read-write-amplify(18.3) write-amplify(8.1) OK, records in: 6047, records dropped: 458 output_compression: NoCompression
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.424338) EVENT_LOG_v1 {"time_micros": 1760003986424332, "job": 24, "event": "compaction_finished", "compaction_time_micros": 37522, "compaction_time_cpu_micros": 21407, "output_level": 6, "num_output_files": 1, "total_output_size": 13065364, "num_input_records": 6047, "num_output_records": 5589, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986424604, "job": 24, "event": "table_file_deletion", "file_number": 44}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000042.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760003986426204, "job": 24, "event": "table_file_deletion", "file_number": 42}
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.386188) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426228) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426229) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426231) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-09:59:46.426232) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 09:59:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:46.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:46 compute-2 nova_compute[163961]: 2025-10-09 09:59:46.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:46 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:47 compute-2 ceph-mon[5983]: pgmap v799: 337 pgs: 337 active+clean; 121 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 2.0 KiB/s wr, 1 op/s
Oct 09 09:59:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3422652302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2796860595' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 09:59:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:47.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:47 compute-2 nova_compute[163961]: 2025-10-09 09:59:47.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:47 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:48.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:48 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:49 compute-2 ceph-mon[5983]: pgmap v800: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct 09 09:59:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:49.573 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:49 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:50 compute-2 podman[169103]: 2025-10-09 09:59:50.207401178 +0000 UTC m=+0.039886465 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 09:59:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 09:59:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:50.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:50 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:51 compute-2 ceph-mon[5983]: pgmap v801: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 09:59:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:51.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:51 compute-2 nova_compute[163961]: 2025-10-09 09:59:51.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:51 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:52 compute-2 sudo[169123]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 09:59:52 compute-2 sudo[169123]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 09:59:52 compute-2 sudo[169123]: pam_unix(sudo:session): session closed for user root
Oct 09 09:59:52 compute-2 ceph-mon[5983]: pgmap v802: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 09 09:59:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:52.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:52 compute-2 nova_compute[163961]: 2025-10-09 09:59:52.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:52 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:53 compute-2 nova_compute[163961]: 2025-10-09 09:59:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:53 compute-2 nova_compute[163961]: 2025-10-09 09:59:53.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1780233837' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3096639622' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:53.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:53 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.189 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.189 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:54 compute-2 ceph-mon[5983]: pgmap v803: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:59:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:54.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2606411561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.550 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.604 2 DEBUG nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.605 2 DEBUG nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.822 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.824 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4862MB free_disk=59.92177200317383GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.824 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.825 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Instance c7c7e2ca-e694-465f-941e-15513c7e91ab actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.875 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=4 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 09:59:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:54 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:54 compute-2 nova_compute[163961]: 2025-10-09 09:59:54.900 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3859217005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.251 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.255 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.281 2 ERROR nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [req-36fd6b6a-bf1f-43c1-8762-07164ceae307] Failed to update inventory to [{'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 41a86af9-054a-49c9-9d2e-f0396c1c31a8.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-36fd6b6a-bf1f-43c1-8762-07164ceae307"}]}
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.295 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.309 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.310 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.323 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.338 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 09:59:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2606411561' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3859217005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.362 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 09:59:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:55.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 09:59:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/136677374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.702 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.705 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updated inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7680, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 4, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.739 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 09:59:55 compute-2 nova_compute[163961]: 2025-10-09 09:59:55.755 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.930s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 09:59:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:55 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:56 compute-2 ceph-mon[5983]: pgmap v804: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 09:59:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1814886035' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/136677374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2956790210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 09:59:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 09:59:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:56.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.756 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.756 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.757 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 09:59:56 compute-2 nova_compute[163961]: 2025-10-09 09:59:56.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:56 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG nova.network.neutron [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct 09 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.080 2 DEBUG nova.objects.instance [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 09:59:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:57.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:57 compute-2 nova_compute[163961]: 2025-10-09 09:59:57.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 09:59:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:57 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 09:59:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.404 2 DEBUG nova.network.neutron [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.420 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:58 compute-2 nova_compute[163961]: 2025-10-09 09:59:58.421 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 09:59:58 compute-2 ceph-mon[5983]: pgmap v805: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Oct 09 09:59:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:09:59:58.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:58 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 09:59:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 09:59:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 09:59:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:09:59:59.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 09:59:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 09:59:59 2025: (VI_0) received an invalid passwd!
Oct 09 09:59:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 09:59:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:00 compute-2 ceph-mon[5983]: pgmap v806: 337 pgs: 337 active+clean; 167 MiB data, 327 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 18 KiB/s wr, 75 op/s
Oct 09 10:00:00 compute-2 ceph-mon[5983]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 10:00:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:00.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:01 compute-2 systemd[1]: Starting system activity accounting tool...
Oct 09 10:00:01 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 10:00:01 compute-2 systemd[1]: Finished system activity accounting tool.
Oct 09 10:00:01 compute-2 podman[169223]: 2025-10-09 10:00:01.199995722 +0000 UTC m=+0.034283454 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct 09 10:00:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:01.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:01 compute-2 nova_compute[163961]: 2025-10-09 10:00:01.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:02 compute-2 ceph-mon[5983]: pgmap v807: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 138 op/s
Oct 09 10:00:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:02.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:02 compute-2 nova_compute[163961]: 2025-10-09 10:00:02.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:03 compute-2 podman[169241]: 2025-10-09 10:00:03.206088934 +0000 UTC m=+0.037929758 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 10:00:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:03.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:04 compute-2 ceph-mon[5983]: pgmap v808: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:04.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:05.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:06 compute-2 ceph-mon[5983]: pgmap v809: 337 pgs: 337 active+clean; 200 MiB data, 352 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:06.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:06 compute-2 nova_compute[163961]: 2025-10-09 10:00:06.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:07.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:07 compute-2 nova_compute[163961]: 2025-10-09 10:00:07.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.349 2 DEBUG nova.compute.manager [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG nova.compute.manager [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-8bfb9190-a455-483f-a18f-f65db3220f30. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:00:08 compute-2 nova_compute[163961]: 2025-10-09 10:00:08.350 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:00:08 compute-2 ceph-mon[5983]: pgmap v810: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 10:00:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:08.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.282 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updated VIF entry in instance network info cache for port 8bfb9190-a455-483f-a18f-f65db3220f30. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct 09 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.283 2 DEBUG nova.network.neutron [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:00:09 compute-2 nova_compute[163961]: 2025-10-09 10:00:09.296 2 DEBUG oslo_concurrency.lockutils [req-00860507-edf9-4dec-b057-d51f5c13f014 req-db2123a0-185f-46db-a500-8461c9451fe1 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:00:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:09.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.278 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:10 compute-2 ceph-mon[5983]: pgmap v811: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:00:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:10.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:11 compute-2 podman[169266]: 2025-10-09 10:00:11.217656186 +0000 UTC m=+0.053861709 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 10:00:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:11.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:11 compute-2 nova_compute[163961]: 2025-10-09 10:00:11.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:12 compute-2 sudo[169291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:12 compute-2 sudo[169291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:12 compute-2 sudo[169291]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:12 compute-2 ceph-mon[5983]: pgmap v812: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 307 KiB/s rd, 2.2 MiB/s wr, 66 op/s
Oct 09 10:00:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:00:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2285839819' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:00:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:12.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:12 compute-2 nova_compute[163961]: 2025-10-09 10:00:12.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:13.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:14.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:14 compute-2 ceph-mon[5983]: pgmap v813: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct 09 10:00:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:15.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:16.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:16 compute-2 ceph-mon[5983]: pgmap v814: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 21 KiB/s wr, 3 op/s
Oct 09 10:00:16 compute-2 nova_compute[163961]: 2025-10-09 10:00:16.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:17.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:17 compute-2 nova_compute[163961]: 2025-10-09 10:00:17.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:18.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:18 compute-2 ceph-mon[5983]: pgmap v815: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 27 KiB/s wr, 4 op/s
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:19.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:20.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:20 compute-2 ceph-mon[5983]: pgmap v816: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 15 KiB/s wr, 2 op/s
Oct 09 10:00:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:21 compute-2 podman[169325]: 2025-10-09 10:00:21.21255292 +0000 UTC m=+0.041865818 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #46. Immutable memtables: 0.
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.393990) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 46
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021394019, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 591, "num_deletes": 257, "total_data_size": 926600, "memory_usage": 939392, "flush_reason": "Manual Compaction"}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #47: started
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021397375, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 47, "file_size": 609209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24746, "largest_seqno": 25332, "table_properties": {"data_size": 606288, "index_size": 893, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6684, "raw_average_key_size": 17, "raw_value_size": 600364, "raw_average_value_size": 1592, "num_data_blocks": 41, "num_entries": 377, "num_filter_entries": 377, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760003987, "oldest_key_time": 1760003987, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 3409 microseconds, and 2536 cpu microseconds.
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397400) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #47: 609209 bytes OK
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.397410) [db/memtable_list.cc:519] [default] Level-0 commit table #47 started
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398297) [db/memtable_list.cc:722] [default] Level-0 commit table #47: memtable #1 done
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398309) EVENT_LOG_v1 {"time_micros": 1760004021398306, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398317) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 923226, prev total WAL file size 923226, number of live WAL files 2.
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000043.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398597) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [47(594KB)], [45(12MB)]
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021398621, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [47], "files_L6": [45], "score": -1, "input_data_size": 13674573, "oldest_snapshot_seqno": -1}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #48: 5444 keys, 13531201 bytes, temperature: kUnknown
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021426368, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 48, "file_size": 13531201, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13494479, "index_size": 22020, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13637, "raw_key_size": 138662, "raw_average_key_size": 25, "raw_value_size": 13395310, "raw_average_value_size": 2460, "num_data_blocks": 899, "num_entries": 5444, "num_filter_entries": 5444, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004021, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 48, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.426494) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13531201 bytes
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.436016) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 492.2 rd, 487.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 12.5 +0.0 blob) out(12.9 +0.0 blob), read-write-amplify(44.7) write-amplify(22.2) OK, records in: 5966, records dropped: 522 output_compression: NoCompression
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.436031) EVENT_LOG_v1 {"time_micros": 1760004021436024, "job": 26, "event": "compaction_finished", "compaction_time_micros": 27782, "compaction_time_cpu_micros": 19207, "output_level": 6, "num_output_files": 1, "total_output_size": 13531201, "num_input_records": 5966, "num_output_records": 5444, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021436167, "job": 26, "event": "table_file_deletion", "file_number": 47}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000045.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004021437504, "job": 26, "event": "table_file_deletion", "file_number": 45}
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.398556) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437537) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437542) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:00:21.437543) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:00:21 compute-2 sudo[169342]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:00:21 compute-2 sudo[169342]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:21 compute-2 sudo[169342]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:21 compute-2 sudo[169367]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:00:21 compute-2 sudo[169367]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:21.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:21 compute-2 nova_compute[163961]: 2025-10-09 10:00:21.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:21 compute-2 sudo[169367]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:22 compute-2 ceph-mon[5983]: pgmap v817: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 18 KiB/s wr, 3 op/s
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:00:22 compute-2 ceph-mon[5983]: pgmap v818: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:00:22 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:00:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:22.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:22 compute-2 nova_compute[163961]: 2025-10-09 10:00:22.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:23.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:24.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:25 compute-2 ceph-mon[5983]: pgmap v819: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 9.8 KiB/s wr, 2 op/s
Oct 09 10:00:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:25.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:25 compute-2 sudo[169426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:00:25 compute-2 sudo[169426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:25 compute-2 sudo[169426]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:26.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:26 compute-2 nova_compute[163961]: 2025-10-09 10:00:26.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:26 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:00:26 compute-2 ceph-mon[5983]: pgmap v820: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 11 KiB/s wr, 2 op/s
Oct 09 10:00:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:27.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:27 compute-2 nova_compute[163961]: 2025-10-09 10:00:27.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:28.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:29 compute-2 ceph-mon[5983]: pgmap v821: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct 09 10:00:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:29.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:30.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:31 compute-2 ceph-mon[5983]: pgmap v822: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.5 KiB/s wr, 1 op/s
Oct 09 10:00:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:31.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:31 compute-2 nova_compute[163961]: 2025-10-09 10:00:31.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:32 compute-2 podman[169457]: 2025-10-09 10:00:32.20483604 +0000 UTC m=+0.039190895 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 09 10:00:32 compute-2 sudo[169474]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:32 compute-2 sudo[169474]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:32 compute-2 sudo[169474]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:32.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:32 compute-2 nova_compute[163961]: 2025-10-09 10:00:32.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:33 compute-2 ceph-mon[5983]: pgmap v823: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 4.9 KiB/s wr, 2 op/s
Oct 09 10:00:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:33.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:34 compute-2 podman[169500]: 2025-10-09 10:00:34.234554939 +0000 UTC m=+0.070174758 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 10:00:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:34.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:35 compute-2 ceph-mon[5983]: pgmap v824: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 4.3 KiB/s wr, 1 op/s
Oct 09 10:00:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:35.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:36.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:36 compute-2 nova_compute[163961]: 2025-10-09 10:00:36.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:37 compute-2 ceph-mon[5983]: pgmap v825: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct 09 10:00:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:37.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:37 compute-2 nova_compute[163961]: 2025-10-09 10:00:37.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:38 compute-2 ovn_controller[62794]: 2025-10-09T10:00:38Z|00039|memory_trim|INFO|Detected inactivity (last active 30016 ms ago): trimming memory
Oct 09 10:00:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:38.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:39 compute-2 ceph-mon[5983]: pgmap v826: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct 09 10:00:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:39.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:40.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:41 compute-2 ceph-mon[5983]: pgmap v827: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 5.3 KiB/s wr, 1 op/s
Oct 09 10:00:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:41.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:41 compute-2 nova_compute[163961]: 2025-10-09 10:00:41.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:42 compute-2 podman[169526]: 2025-10-09 10:00:42.220405896 +0000 UTC m=+0.055640477 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct 09 10:00:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:42.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:42 compute-2 nova_compute[163961]: 2025-10-09 10:00:42.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:43 compute-2 ceph-mon[5983]: pgmap v828: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 6.3 KiB/s wr, 2 op/s
Oct 09 10:00:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:43.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:44.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:45 compute-2 ceph-mon[5983]: pgmap v829: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 3.0 KiB/s wr, 1 op/s
Oct 09 10:00:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:45.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:46.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:46 compute-2 nova_compute[163961]: 2025-10-09 10:00:46.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:47 compute-2 ceph-mon[5983]: pgmap v830: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 4.7 KiB/s wr, 2 op/s
Oct 09 10:00:47 compute-2 nova_compute[163961]: 2025-10-09 10:00:47.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:47.449 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:47.449 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:00:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:47.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:47 compute-2 nova_compute[163961]: 2025-10-09 10:00:47.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3430617062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:48.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:49 compute-2 ceph-mon[5983]: pgmap v831: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.299 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.299 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.314 2 DEBUG nova.objects.instance [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'flavor' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.328 2 DEBUG nova.virt.libvirt.vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.328 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.329 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.331 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.332 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.334 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Attempting to detach device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.334 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <mac address="fa:16:3e:79:80:50"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <model type="virtio"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <mtu size="1442"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <target dev="tap8bfb9190-a4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </interface>
Oct 09 10:00:49 compute-2 nova_compute[163961]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.337 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.339 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface>not found in domain: <domain type='kvm' id='1'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <name>instance-00000006</name>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <metadata>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:flavor name="m1.nano">
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:memory>128</nova:memory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:disk>1</nova:disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:swap>0</nova:swap>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:vcpus>1</nova:vcpus>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:flavor>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </metadata>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <memory unit='KiB'>131072</memory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <vcpu placement='static'>1</vcpu>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <resource>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <partition>/machine</partition>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </resource>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <sysinfo type='smbios'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <system>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='manufacturer'>RDO</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='product'>OpenStack Compute</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='serial'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='uuid'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='family'>Virtual Machine</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </system>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </sysinfo>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <os>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <boot dev='hd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <smbios mode='sysinfo'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </os>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <features>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <acpi/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <apic/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <vmcoreinfo state='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </features>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <cpu mode='custom' match='exact' check='full'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <model fallback='forbid'>EPYC-Milan</model>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <vendor>AMD</vendor>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='x2apic'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='tsc-deadline'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='hypervisor'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='tsc_adjust'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='vaes'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='vpclmulqdq'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='spec-ctrl'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='stibp'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='arch-capabilities'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='ssbd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='cmp_legacy'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='overflow-recov'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='succor'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='virt-ssbd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='lbrv'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='tsc-scale'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='vmcb-clean'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='flushbyasid'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='pause-filter'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='pfthreshold'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='v-vmsave-vmload'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='vgif'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='rdctl-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='mds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='pschange-mc-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='gds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='rfds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='svm'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='topoext'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='npt'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='nrip-save'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </cpu>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <clock offset='utc'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='pit' tickpolicy='delay'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='hpet' present='no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </clock>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_poweroff>destroy</on_poweroff>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_reboot>restart</on_reboot>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_crash>destroy</on_crash>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <devices>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <disk type='network' device='disk'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <auth username='openstack'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </auth>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk' index='2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.100' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.102' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.101' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </source>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='vda' bus='virtio'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='virtio-disk0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <disk type='network' device='cdrom'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <auth username='openstack'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </auth>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config' index='1'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.100' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.102' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.101' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </source>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='sda' bus='sata'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <readonly/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='sata0-0-0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='0' model='pcie-root'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pcie.0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='1' port='0x10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='2' port='0x11'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='3' port='0x12'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='4' port='0x13'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='5' port='0x14'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='6' port='0x15'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='7' port='0x16'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='8' port='0x17'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.8'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='9' port='0x18'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.9'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='10' port='0x19'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='11' port='0x1a'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.11'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='12' port='0x1b'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.12'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='13' port='0x1c'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.13'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='14' port='0x1d'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.14'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='15' port='0x1e'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.15'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='16' port='0x1f'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.16'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='17' port='0x20'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.17'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='18' port='0x21'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.18'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='19' port='0x22'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.19'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='20' port='0x23'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.20'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='21' port='0x24'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.21'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='22' port='0x25'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.22'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='23' port='0x26'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.23'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='24' port='0x27'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.24'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='25' port='0x28'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.25'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-pci-bridge'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.26'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='usb'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='sata' index='0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='ide'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <interface type='ethernet'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mac address='fa:16:3e:d9:1b:2f'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='tap55484b13-54'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model type='virtio'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mtu size='1442'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='net0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </interface>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <interface type='ethernet'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mac address='fa:16:3e:79:80:50'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='tap8bfb9190-a4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model type='virtio'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mtu size='1442'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='net1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </interface>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <serial type='pty'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source path='/dev/pts/0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target type='isa-serial' port='0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <model name='isa-serial'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </target>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='serial0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </serial>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <console type='pty' tty='/dev/pts/0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source path='/dev/pts/0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target type='serial' port='0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='serial0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </console>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='tablet' bus='usb'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='usb' bus='0' port='1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='mouse' bus='ps2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='keyboard' bus='ps2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <listen type='address' address='::0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </graphics>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <audio id='1' type='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <video>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model type='virtio' heads='1' primary='yes'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='video0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </video>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <watchdog model='itco' action='reset'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='watchdog0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </watchdog>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <memballoon model='virtio'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <stats period='10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='balloon0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </memballoon>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <rng model='virtio'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <backend model='random'>/dev/urandom</backend>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='rng0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </rng>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </devices>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <label>system_u:system_r:svirt_t:s0:c477,c914</label>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c477,c914</imagelabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </seclabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <label>+107:+107</label>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <imagelabel>+107:+107</imagelabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </seclabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </domain>
Oct 09 10:00:49 compute-2 nova_compute[163961]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.339 2 INFO nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the persistent domain config.
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.340 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] (1/8): Attempting to detach device tap8bfb9190-a4 with device alias net1 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.340 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] detach device xml: <interface type="ethernet">
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <mac address="fa:16:3e:79:80:50"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <model type="virtio"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <driver name="vhost" rx_queue_size="512"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <mtu size="1442"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <target dev="tap8bfb9190-a4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </interface>
Oct 09 10:00:49 compute-2 nova_compute[163961]:  detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct 09 10:00:49 compute-2 kernel: tap8bfb9190-a4 (unregistering): left promiscuous mode
Oct 09 10:00:49 compute-2 NetworkManager[984]: <info>  [1760004049.4362] device (tap8bfb9190-a4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00040|binding|INFO|Releasing lport 8bfb9190-a455-483f-a18f-f65db3220f30 from this chassis (sb_readonly=0)
Oct 09 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00041|binding|INFO|Setting lport 8bfb9190-a455-483f-a18f-f65db3220f30 down in Southbound
Oct 09 10:00:49 compute-2 ovn_controller[62794]: 2025-10-09T10:00:49Z|00042|binding|INFO|Removing iface tap8bfb9190-a4 ovn-installed in OVS
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.453 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:79:80:50 10.100.0.28', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.28/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8a09146a-9f3c-432d-a7ac-1e34c91ed6bf, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=8bfb9190-a455-483f-a18f-f65db3220f30) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.455 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 8bfb9190-a455-483f-a18f-f65db3220f30 in datapath 4f792301-cf2d-455d-8ad6-8a55cc3146e9 unbound from our chassis
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.456 71793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4f792301-cf2d-455d-8ad6-8a55cc3146e9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.457 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[406f3ba4-adb8-470c-86ce-feae033adf39]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.457 71793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 namespace which is not needed anymore
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.456 2 DEBUG nova.virt.libvirt.driver [None req-631b30e5-835a-4bf0-a4ad-cb02ddb5bdb8 - - - - - -] Received event <DeviceRemovedEvent: 1760004049.4561546, c7c7e2ca-e694-465f-941e-15513c7e91ab => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.457 2 DEBUG nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Start waiting for the detach event from libvirt for device tap8bfb9190-a4 with device alias net1 for instance c7c7e2ca-e694-465f-941e-15513c7e91ab _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.457 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.462 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:79:80:50"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap8bfb9190-a4"/></interface>not found in domain: <domain type='kvm' id='1'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <name>instance-00000006</name>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <uuid>c7c7e2ca-e694-465f-941e-15513c7e91ab</uuid>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <metadata>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:creationTime>2025-10-09 09:59:37</nova:creationTime>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:flavor name="m1.nano">
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:memory>128</nova:memory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:disk>1</nova:disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:swap>0</nova:swap>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:vcpus>1</nova:vcpus>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:flavor>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:port uuid="8bfb9190-a455-483f-a18f-f65db3220f30">
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.28" ipVersion="4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </metadata>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <memory unit='KiB'>131072</memory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <currentMemory unit='KiB'>131072</currentMemory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <vcpu placement='static'>1</vcpu>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <resource>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <partition>/machine</partition>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </resource>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <sysinfo type='smbios'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <system>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='manufacturer'>RDO</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='product'>OpenStack Compute</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='serial'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='uuid'>c7c7e2ca-e694-465f-941e-15513c7e91ab</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <entry name='family'>Virtual Machine</entry>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </system>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </sysinfo>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <os>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <boot dev='hd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <smbios mode='sysinfo'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </os>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <features>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <acpi/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <apic/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <vmcoreinfo state='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </features>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <cpu mode='custom' match='exact' check='full'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <model fallback='forbid'>EPYC-Milan</model>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <vendor>AMD</vendor>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='x2apic'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='tsc-deadline'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='hypervisor'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='tsc_adjust'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='vaes'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='vpclmulqdq'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='spec-ctrl'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='stibp'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='arch-capabilities'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='ssbd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='cmp_legacy'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='overflow-recov'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='succor'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='virt-ssbd'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='lbrv'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='tsc-scale'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='vmcb-clean'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='flushbyasid'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='pause-filter'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='pfthreshold'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='v-vmsave-vmload'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='vgif'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='lfence-always-serializing'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='rdctl-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='mds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='pschange-mc-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='gds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='rfds-no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='svm'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='require' name='topoext'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='npt'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='nrip-save'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <feature policy='disable' name='svme-addr-chk'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </cpu>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <clock offset='utc'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='pit' tickpolicy='delay'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='rtc' tickpolicy='catchup'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <timer name='hpet' present='no'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </clock>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_poweroff>destroy</on_poweroff>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_reboot>restart</on_reboot>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <on_crash>destroy</on_crash>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <devices>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <emulator>/usr/libexec/qemu-kvm</emulator>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <disk type='network' device='disk'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <auth username='openstack'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </auth>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk' index='2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.100' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.102' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.101' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </source>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='vda' bus='virtio'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='virtio-disk0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <disk type='network' device='cdrom'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='qemu' type='raw' cache='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <auth username='openstack'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <secret type='ceph' uuid='286f8bf0-da72-5823-9a4e-ac4457d9e609'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </auth>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source protocol='rbd' name='vms/c7c7e2ca-e694-465f-941e-15513c7e91ab_disk.config' index='1'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.100' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.102' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <host name='192.168.122.101' port='6789'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </source>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='sda' bus='sata'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <readonly/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='sata0-0-0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='0' model='pcie-root'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pcie.0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='1' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='1' port='0x10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='2' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='2' port='0x11'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='3' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='3' port='0x12'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='4' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='4' port='0x13'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='5' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='5' port='0x14'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='6' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='6' port='0x15'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='7' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='7' port='0x16'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='8' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='8' port='0x17'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.8'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='9' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='9' port='0x18'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.9'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='10' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='10' port='0x19'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='11' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='11' port='0x1a'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.11'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='12' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='12' port='0x1b'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.12'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='13' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='13' port='0x1c'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.13'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='14' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='14' port='0x1d'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.14'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='15' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='15' port='0x1e'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.15'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='16' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='16' port='0x1f'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.16'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='17' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='17' port='0x20'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.17'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='18' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='18' port='0x21'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.18'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='19' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='19' port='0x22'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.19'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='20' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='20' port='0x23'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.20'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='21' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='21' port='0x24'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.21'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='22' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='22' port='0x25'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.22'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='23' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='23' port='0x26'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.23'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='24' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='24' port='0x27'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.24'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='25' model='pcie-root-port'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-root-port'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target chassis='25' port='0x28'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.25'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model name='pcie-pci-bridge'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='pci.26'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='usb' index='0' model='piix3-uhci'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='usb'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <controller type='sata' index='0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='ide'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </controller>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <interface type='ethernet'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mac address='fa:16:3e:d9:1b:2f'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target dev='tap55484b13-54'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model type='virtio'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <driver name='vhost' rx_queue_size='512'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <mtu size='1442'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='net0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </interface>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <serial type='pty'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source path='/dev/pts/0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target type='isa-serial' port='0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:         <model name='isa-serial'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       </target>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='serial0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </serial>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <console type='pty' tty='/dev/pts/0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <source path='/dev/pts/0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <log file='/var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab/console.log' append='off'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <target type='serial' port='0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='serial0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </console>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='tablet' bus='usb'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='usb' bus='0' port='1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='mouse' bus='ps2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input1'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <input type='keyboard' bus='ps2'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='input2'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </input>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <listen type='address' address='::0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </graphics>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <audio id='1' type='none'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <video>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <model type='virtio' heads='1' primary='yes'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='video0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </video>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <watchdog model='itco' action='reset'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='watchdog0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </watchdog>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <memballoon model='virtio'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <stats period='10'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='balloon0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </memballoon>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <rng model='virtio'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <backend model='random'>/dev/urandom</backend>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <alias name='rng0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </rng>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </devices>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <label>system_u:system_r:svirt_t:s0:c477,c914</label>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <imagelabel>system_u:object_r:svirt_image_t:s0:c477,c914</imagelabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </seclabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <seclabel type='dynamic' model='dac' relabel='yes'>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <label>+107:+107</label>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <imagelabel>+107:+107</imagelabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </seclabel>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </domain>
Oct 09 10:00:49 compute-2 nova_compute[163961]:  get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.462 2 INFO nova.virt.libvirt.driver [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully detached device tap8bfb9190-a4 from instance c7c7e2ca-e694-465f-941e-15513c7e91ab from the live domain config.
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.virt.libvirt.vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "8bfb9190-a455-483f-a18f-f65db3220f30", "address": "fa:16:3e:79:80:50", "network": {"id": "4f792301-cf2d-455d-8ad6-8a55cc3146e9", "bridge": "br-int", "label": "tempest-network-smoke--166756604", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.28", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bfb9190-a4", "ovs_interfaceid": "8bfb9190-a455-483f-a18f-f65db3220f30", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.463 2 DEBUG nova.network.os_vif_util [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.464 2 DEBUG os_vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bfb9190-a4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.477 2 INFO os_vif [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:79:80:50,bridge_name='br-int',has_traffic_filtering=True,id=8bfb9190-a455-483f-a18f-f65db3220f30,network=Network(4f792301-cf2d-455d-8ad6-8a55cc3146e9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bfb9190-a4')
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.477 2 DEBUG nova.virt.libvirt.guest [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:name>tempest-TestNetworkBasicOps-server-1164663661</nova:name>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:creationTime>2025-10-09 10:00:49</nova:creationTime>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:flavor name="m1.nano">
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:memory>128</nova:memory>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:disk>1</nova:disk>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:swap>0</nova:swap>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:ephemeral>0</nova:ephemeral>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:vcpus>1</nova:vcpus>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:flavor>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:user uuid="2351e05157514d1995a1ea4151d12fee">tempest-TestNetworkBasicOps-74406332-project-member</nova:user>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:project uuid="c69d102fb5504f48809f5fc47f1cb831">tempest-TestNetworkBasicOps-74406332</nova:project>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:owner>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:root type="image" uuid="9546778e-959c-466e-9bef-81ace5bd1cc5"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   <nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     <nova:port uuid="55484b13-541c-4895-beab-bdcdaa30f4fe">
Oct 09 10:00:49 compute-2 nova_compute[163961]:       <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct 09 10:00:49 compute-2 nova_compute[163961]:     </nova:port>
Oct 09 10:00:49 compute-2 nova_compute[163961]:   </nova:ports>
Oct 09 10:00:49 compute-2 nova_compute[163961]: </nova:instance>
Oct 09 10:00:49 compute-2 nova_compute[163961]:  set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct 09 10:00:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : haproxy version is 2.8.14-c23fe91
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [NOTICE]   (169062) : path to executable is /usr/sbin/haproxy
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : Exiting Master process...
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : Exiting Master process...
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [ALERT]    (169062) : Current worker (169064) exited with code 143 (Terminated)
Oct 09 10:00:49 compute-2 neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9[169058]: [WARNING]  (169062) : All workers exited. Exiting... (0)
Oct 09 10:00:49 compute-2 systemd[1]: libpod-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope: Deactivated successfully.
Oct 09 10:00:49 compute-2 conmon[169058]: conmon 83c3e73ac0e8f23f4c80 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope/container/memory.events
Oct 09 10:00:49 compute-2 podman[169575]: 2025-10-09 10:00:49.557984177 +0000 UTC m=+0.033531525 container died 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct 09 10:00:49 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87-userdata-shm.mount: Deactivated successfully.
Oct 09 10:00:49 compute-2 systemd[1]: var-lib-containers-storage-overlay-3a3fd04afb0f5989930c0fda502bb4a65862b9bd9c1ecbb0d35d11811aaed28a-merged.mount: Deactivated successfully.
Oct 09 10:00:49 compute-2 podman[169575]: 2025-10-09 10:00:49.586299769 +0000 UTC m=+0.061847117 container cleanup 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct 09 10:00:49 compute-2 systemd[1]: libpod-conmon-83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87.scope: Deactivated successfully.
Oct 09 10:00:49 compute-2 podman[169603]: 2025-10-09 10:00:49.625159779 +0000 UTC m=+0.024261600 container remove 83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.629 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[1dde37e0-480d-4f60-a38e-a69fe2c49437]: (4, ('Thu Oct  9 10:00:49 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87)\n83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87\nThu Oct  9 10:00:49 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 (83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87)\n83c3e73ac0e8f23f4c8076a464f22921686d496d2994b008a328ec815a06cd87\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.630 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[5ca02b48-ccde-4395-8b93-6aa998dca7c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.631 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f792301-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 kernel: tap4f792301-c0: left promiscuous mode
Oct 09 10:00:49 compute-2 nova_compute[163961]: 2025-10-09 10:00:49.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.649 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[d12dfc7a-5778-48dc-bcff-be6ead284c20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:49.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.663 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[5fbcd4a7-2f2d-4820-b7ec-579c7672a8bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.664 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[6b644519-8ea3-4565-b53f-53a5e7e9a935]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.676 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[c732f0ae-d817-4fdc-b763-c594731e21fa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 166223, 'reachable_time': 21749, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169614, 'error': None, 'target': 'ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 systemd[1]: run-netns-ovnmeta\x2d4f792301\x2dcf2d\x2d455d\x2d8ad6\x2d8a55cc3146e9.mount: Deactivated successfully.
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.683 72006 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4f792301-cf2d-455d-8ad6-8a55cc3146e9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:00:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:49.684 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b3a848-55df-4705-95c9-0a972cfadb5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.107 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.107 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:00:50 compute-2 nova_compute[163961]: 2025-10-09 10:00:50.108 2 DEBUG nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct 09 10:00:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:00:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:50.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:51 compute-2 ceph-mon[5983]: pgmap v832: 337 pgs: 337 active+clean; 200 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 2.7 KiB/s wr, 1 op/s
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.225 2 INFO nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Port 8bfb9190-a455-483f-a18f-f65db3220f30 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.225 2 DEBUG nova.network.neutron [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [{"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.237 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.251 2 DEBUG oslo_concurrency.lockutils [None req-70e9c6a7-4ab4-4f14-b911-20745a231414 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "interface-c7c7e2ca-e694-465f-941e-15513c7e91ab-8bfb9190-a455-483f-a18f-f65db3220f30" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 1.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:51 compute-2 ovn_controller[62794]: 2025-10-09T10:00:51Z|00043|binding|INFO|Releasing lport 188102c6-f5ba-4733-92be-2659db7ae55a from this chassis (sb_readonly=0)
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.478 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.479 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 WARNING nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-unplugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.480 2 DEBUG oslo_concurrency.lockutils [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 WARNING nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-8bfb9190-a455-483f-a18f-f65db3220f30 for instance with vm_state active and task_state None.
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.481 2 DEBUG nova.compute.manager [req-6c6b62ae-e4b5-4c0a-b9e6-3e2245a047c1 req-be16c10c-5fcd-4201-9d5d-491269d2e1e9 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-deleted-8bfb9190-a455-483f-a18f-f65db3220f30 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:51.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:51 compute-2 nova_compute[163961]: 2025-10-09 10:00:51.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:52 compute-2 podman[169618]: 2025-10-09 10:00:52.208274939 +0000 UTC m=+0.043897597 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.280 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.281 2 INFO nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Terminating instance
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.282 2 DEBUG nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct 09 10:00:52 compute-2 kernel: tap55484b13-54 (unregistering): left promiscuous mode
Oct 09 10:00:52 compute-2 NetworkManager[984]: <info>  [1760004052.3177] device (tap55484b13-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct 09 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00044|binding|INFO|Releasing lport 55484b13-541c-4895-beab-bdcdaa30f4fe from this chassis (sb_readonly=0)
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00045|binding|INFO|Setting lport 55484b13-541c-4895-beab-bdcdaa30f4fe down in Southbound
Oct 09 10:00:52 compute-2 ovn_controller[62794]: 2025-10-09T10:00:52Z|00046|binding|INFO|Removing iface tap55484b13-54 ovn-installed in OVS
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.328 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d9:1b:2f 10.100.0.6'], port_security=['fa:16:3e:d9:1b:2f 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-2.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c7c7e2ca-e694-465f-941e-15513c7e91ab', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '4', 'neutron:security_group_ids': '72489230-c514-4cf9-bf1c-35e063204738', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-2.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>], logical_port=55484b13-541c-4895-beab-bdcdaa30f4fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f38807e66d0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.329 71793 INFO neutron.agent.ovn.metadata.agent [-] Port 55484b13-541c-4895-beab-bdcdaa30f4fe in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 unbound from our chassis
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.330 71793 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ab21f371-26e2-4c4f-bba0-3c44fb308723, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.330 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[50d11188-3937-47b1-a6b8-ae0828875c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.331 71793 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 namespace which is not needed anymore
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct 09 10:00:52 compute-2 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000006.scope: Consumed 14.845s CPU time.
Oct 09 10:00:52 compute-2 systemd-machined[121527]: Machine qemu-1-instance-00000006 terminated.
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : haproxy version is 2.8.14-c23fe91
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [NOTICE]   (168333) : path to executable is /usr/sbin/haproxy
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : Exiting Master process...
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : Exiting Master process...
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [ALERT]    (168333) : Current worker (168335) exited with code 143 (Terminated)
Oct 09 10:00:52 compute-2 neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723[168329]: [WARNING]  (168333) : All workers exited. Exiting... (0)
Oct 09 10:00:52 compute-2 systemd[1]: libpod-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope: Deactivated successfully.
Oct 09 10:00:52 compute-2 conmon[168329]: conmon 53e2b84d792e150b60f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope/container/memory.events
Oct 09 10:00:52 compute-2 podman[169654]: 2025-10-09 10:00:52.427100437 +0000 UTC m=+0.035464279 container died 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct 09 10:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc-userdata-shm.mount: Deactivated successfully.
Oct 09 10:00:52 compute-2 systemd[1]: var-lib-containers-storage-overlay-d3b78e34eb4bd48f615c51f92b1c60c1faa9ea89e7ed53520625d65534f9f4de-merged.mount: Deactivated successfully.
Oct 09 10:00:52 compute-2 podman[169654]: 2025-10-09 10:00:52.453453719 +0000 UTC m=+0.061817561 container cleanup 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 10:00:52 compute-2 systemd[1]: libpod-conmon-53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc.scope: Deactivated successfully.
Oct 09 10:00:52 compute-2 sudo[169673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:00:52 compute-2 sudo[169673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:00:52 compute-2 sudo[169673]: pam_unix(sudo:session): session closed for user root
Oct 09 10:00:52 compute-2 NetworkManager[984]: <info>  [1760004052.4939] manager: (tap55484b13-54): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Oct 09 10:00:52 compute-2 podman[169700]: 2025-10-09 10:00:52.500853239 +0000 UTC m=+0.031428170 container remove 53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.507 2 INFO nova.virt.libvirt.driver [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance destroyed successfully.
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.507 2 DEBUG nova.objects.instance [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lazy-loading 'resources' on Instance uuid c7c7e2ca-e694-465f-941e-15513c7e91ab obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.506 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[2d8e1c15-4948-4d34-b6c7-00a2c393ff80]: (4, ('Thu Oct  9 10:00:52 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 (53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc)\n53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc\nThu Oct  9 10:00:52 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 (53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc)\n53e2b84d792e150b60f96f5aa06be4c6e0154f4186fa1667dc36ef49bd3e50dc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.509 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[ee50347d-d9c1-40e9-b6a6-9aff64916f9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.510 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapab21f371-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 kernel: tapab21f371-20: left promiscuous mode
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.521 2 DEBUG nova.virt.libvirt.vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-09T09:59:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1164663661',display_name='tempest-TestNetworkBasicOps-server-1164663661',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-2.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1164663661',id=6,image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJekZCUuyZFfRi4sqQ/mP7Ozivo49QKXFHHjMUzJNdIpXHKQgOnPPcVpZjnx45IP0IUXYjxjP4OCv7gqvDPFNQ0nZIMIyF69sokT4DnjnPbGTb16o+q+6RbNVaDlRNZ6mw==',key_name='tempest-TestNetworkBasicOps-1319761674',keypairs=<?>,launch_index=0,launched_at=2025-10-09T09:59:11Z,launched_on='compute-2.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-2.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c69d102fb5504f48809f5fc47f1cb831',ramdisk_id='',reservation_id='r-mxb6tzm8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='9546778e-959c-466e-9bef-81ace5bd1cc5',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-74406332',owner_user_name='tempest-TestNetworkBasicOps-74406332-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-09T09:59:11Z,user_data=None,user_id='2351e05157514d1995a1ea4151d12fee',uuid=c7c7e2ca-e694-465f-941e-15513c7e91ab,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.521 2 DEBUG nova.network.os_vif_util [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converting VIF {"id": "55484b13-541c-4895-beab-bdcdaa30f4fe", "address": "fa:16:3e:d9:1b:2f", "network": {"id": "ab21f371-26e2-4c4f-bba0-3c44fb308723", "bridge": "br-int", "label": "tempest-network-smoke--804551991", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c69d102fb5504f48809f5fc47f1cb831", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55484b13-54", "ovs_interfaceid": "55484b13-541c-4895-beab-bdcdaa30f4fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.522 2 DEBUG nova.network.os_vif_util [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.522 2 DEBUG os_vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.523 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55484b13-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.531 2 INFO os_vif [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:d9:1b:2f,bridge_name='br-int',has_traffic_filtering=True,id=55484b13-541c-4895-beab-bdcdaa30f4fe,network=Network(ab21f371-26e2-4c4f-bba0-3c44fb308723),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55484b13-54')
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.532 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[74dcc56f-73c8-4b30-a23d-b38c721714e8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.547 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[cd658022-bd75-4693-a4b2-157c09d5ccfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.548 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[03b0b621-91a9-4623-9d7d-6dbf5b3cb341]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:52.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.561 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[d00236b6-2fed-45dc-a9d1-e137eef4fd22]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 163839, 'reachable_time': 20769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 169747, 'error': None, 'target': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 systemd[1]: run-netns-ovnmeta\x2dab21f371\x2d26e2\x2d4c4f\x2dbba0\x2d3c44fb308723.mount: Deactivated successfully.
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.565 72006 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct 09 10:00:52 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:52.565 72006 DEBUG oslo.privsep.daemon [-] privsep: reply[57801d77-a591-482d-812f-991dac282980]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.687 2 INFO nova.virt.libvirt.driver [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deleting instance files /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab_del
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.688 2 INFO nova.virt.libvirt.driver [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deletion of /var/lib/nova/instances/c7c7e2ca-e694-465f-941e-15513c7e91ab_del complete
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.725 2 DEBUG nova.virt.libvirt.host [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.725 2 INFO nova.virt.libvirt.host [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] UEFI support detected
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.726 2 INFO nova.compute.manager [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 0.44 seconds to destroy the instance on the hypervisor.
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.726 2 DEBUG oslo.service.loopingcall [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.727 2 DEBUG nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct 09 10:00:52 compute-2 nova_compute[163961]: 2025-10-09 10:00:52.727 2 DEBUG nova.network.neutron [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct 09 10:00:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 10:00:53 compute-2 ceph-mon[5983]: pgmap v833: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 4.8 KiB/s wr, 29 op/s
Oct 09 10:00:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/743125290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.243 2 DEBUG nova.network.neutron [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.265 2 INFO nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Took 0.54 seconds to deallocate network for instance.
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.299 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.300 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.407 2 DEBUG oslo_concurrency.processutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.482 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.482 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing instance network info cache due to event network-changed-55484b13-541c-4895-beab-bdcdaa30f4fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquired lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.483 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Refreshing network info cache for port 55484b13-541c-4895-beab-bdcdaa30f4fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.581 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct 09 10:00:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:53.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1311931696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.758 2 DEBUG oslo_concurrency.processutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.762 2 DEBUG nova.compute.provider_tree [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.773 2 DEBUG nova.scheduler.client.report [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.790 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.830 2 INFO nova.scheduler.client.report [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Deleted allocations for instance c7c7e2ca-e694-465f-941e-15513c7e91ab
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.870 2 DEBUG oslo_concurrency.lockutils [None req-065523a3-6765-4b45-98c7-09b4562a673e 2351e05157514d1995a1ea4151d12fee c69d102fb5504f48809f5fc47f1cb831 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.941 2 DEBUG nova.network.neutron [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.951 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Releasing lock "refresh_cache-c7c7e2ca-e694-465f-941e-15513c7e91ab" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.951 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.952 2 WARNING nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-unplugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state deleted and task_state None.
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Acquiring lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG oslo_concurrency.lockutils [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] Lock "c7c7e2ca-e694-465f-941e-15513c7e91ab-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.953 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] No waiting events found dispatching network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.954 2 WARNING nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received unexpected event network-vif-plugged-55484b13-541c-4895-beab-bdcdaa30f4fe for instance with vm_state deleted and task_state None.
Oct 09 10:00:53 compute-2 nova_compute[163961]: 2025-10-09 10:00:53.954 2 DEBUG nova.compute.manager [req-1c53ba6f-d5ce-46e0-9f52-69078523dc1d req-4605a750-5520-4fd0-8ec9-cf04d18c7265 b902d789e48c45bb9a7509299f4a58c5 f3eb8344cfb74230931fa3e9a21913e4 - - default default] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Received event network-vif-deleted-55484b13-541c-4895-beab-bdcdaa30f4fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct 09 10:00:54 compute-2 nova_compute[163961]: 2025-10-09 10:00:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-2 nova_compute[163961]: 2025-10-09 10:00:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3841926323' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1311931696' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:54 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:54.451 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:00:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:54.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.183 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:55 compute-2 ceph-mon[5983]: pgmap v834: 337 pgs: 337 active+clean; 121 MiB data, 307 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.8 KiB/s wr, 29 op/s
Oct 09 10:00:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:55.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:55 compute-2 nova_compute[163961]: 2025-10-09 10:00:55.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:00:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.180 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:77:89:5b'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ed655dd9-bb73-453e-8a8b-a0dd965263b3, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=188102c6-f5ba-4733-92be-2659db7ae55a) old=Port_Binding(mac=['fa:16:3e:77:89:5b 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ab21f371-26e2-4c4f-bba0-3c44fb308723', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c69d102fb5504f48809f5fc47f1cb831', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.181 71793 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 188102c6-f5ba-4733-92be-2659db7ae55a in datapath ab21f371-26e2-4c4f-bba0-3c44fb308723 updated
Oct 09 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.181 71793 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network ab21f371-26e2-4c4f-bba0-3c44fb308723 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:00:56.182 168221 DEBUG oslo.privsep.daemon [-] privsep: reply[dfb95d57-869e-4d72-8c9f-c9781cbec431]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.182 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.202 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.202 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.203 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3252183875' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2786803058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.550 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:00:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:56.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.754 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.755 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4994MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.755 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.756 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.793 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.794 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:56 compute-2 nova_compute[163961]: 2025-10-09 10:00:56.805 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:00:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2722941605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.146 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.150 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.246 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:00:57 compute-2 ceph-mon[5983]: pgmap v835: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 7.0 KiB/s wr, 57 op/s
Oct 09 10:00:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3594715963' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2786803058' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2722941605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.263 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.263 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.264 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.264 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.272 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 10:00:57 compute-2 nova_compute[163961]: 2025-10-09 10:00:57.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:00:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:57.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:58 compute-2 nova_compute[163961]: 2025-10-09 10:00:58.261 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:58 compute-2 nova_compute[163961]: 2025-10-09 10:00:58.262 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:00:58.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:59 compute-2 nova_compute[163961]: 2025-10-09 10:00:59.169 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:00:59 compute-2 ceph-mon[5983]: pgmap v836: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct 09 10:00:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:00:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:00:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:00:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:00:59.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:00:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:00:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:00:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:00:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:00.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:01 compute-2 CROND[169829]: (root) CMD (run-parts /etc/cron.hourly)
Oct 09 10:01:01 compute-2 run-parts[169832]: (/etc/cron.hourly) starting 0anacron
Oct 09 10:01:01 compute-2 anacron[169840]: Anacron started on 2025-10-09
Oct 09 10:01:01 compute-2 anacron[169840]: Will run job `cron.daily' in 16 min.
Oct 09 10:01:01 compute-2 anacron[169840]: Will run job `cron.weekly' in 36 min.
Oct 09 10:01:01 compute-2 anacron[169840]: Will run job `cron.monthly' in 56 min.
Oct 09 10:01:01 compute-2 anacron[169840]: Jobs will be executed sequentially
Oct 09 10:01:01 compute-2 run-parts[169842]: (/etc/cron.hourly) finished 0anacron
Oct 09 10:01:01 compute-2 CROND[169828]: (root) CMDEND (run-parts /etc/cron.hourly)
Oct 09 10:01:01 compute-2 ceph-mon[5983]: pgmap v837: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 56 op/s
Oct 09 10:01:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:01.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:01 compute-2 nova_compute[163961]: 2025-10-09 10:01:01.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:02 compute-2 nova_compute[163961]: 2025-10-09 10:01:02.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:02.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:03 compute-2 podman[169845]: 2025-10-09 10:01:03.211509037 +0000 UTC m=+0.044255174 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 10:01:03 compute-2 ceph-mon[5983]: pgmap v838: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 5.3 KiB/s wr, 57 op/s
Oct 09 10:01:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:03.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:04.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:05 compute-2 podman[169864]: 2025-10-09 10:01:05.233885682 +0000 UTC m=+0.068182261 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct 09 10:01:05 compute-2 ceph-mon[5983]: pgmap v839: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Oct 09 10:01:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:05.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:06.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:06 compute-2 nova_compute[163961]: 2025-10-09 10:01:06.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:07 compute-2 ceph-mon[5983]: pgmap v840: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 29 op/s
Oct 09 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.506 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1760004052.5048726, c7c7e2ca-e694-465f-941e-15513c7e91ab => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct 09 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.506 2 INFO nova.compute.manager [-] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] VM Stopped (Lifecycle Event)
Oct 09 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.520 2 DEBUG nova.compute.manager [None req-5263c0d5-61ed-4406-a3a0-445c32618551 - - - - - -] [instance: c7c7e2ca-e694-465f-941e-15513c7e91ab] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct 09 10:01:07 compute-2 nova_compute[163961]: 2025-10-09 10:01:07.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:01:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:07.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:01:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #49. Immutable memtables: 0.
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.308815) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 49
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068308865, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 723, "num_deletes": 251, "total_data_size": 1417510, "memory_usage": 1444272, "flush_reason": "Manual Compaction"}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #50: started
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312095, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 50, "file_size": 932885, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25337, "largest_seqno": 26055, "table_properties": {"data_size": 929346, "index_size": 1383, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8231, "raw_average_key_size": 19, "raw_value_size": 922210, "raw_average_value_size": 2190, "num_data_blocks": 61, "num_entries": 421, "num_filter_entries": 421, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004022, "oldest_key_time": 1760004022, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 3301 microseconds, and 2165 cpu microseconds.
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312120) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #50: 932885 bytes OK
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312131) [db/memtable_list.cc:519] [default] Level-0 commit table #50 started
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312483) [db/memtable_list.cc:722] [default] Level-0 commit table #50: memtable #1 done
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312495) EVENT_LOG_v1 {"time_micros": 1760004068312492, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312504) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1413626, prev total WAL file size 1413626, number of live WAL files 2.
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000046.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312864) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [50(911KB)], [48(12MB)]
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068312882, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [50], "files_L6": [48], "score": -1, "input_data_size": 14464086, "oldest_snapshot_seqno": -1}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #51: 5349 keys, 12339798 bytes, temperature: kUnknown
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068339374, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 51, "file_size": 12339798, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12304704, "index_size": 20648, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13381, "raw_key_size": 137381, "raw_average_key_size": 25, "raw_value_size": 12208121, "raw_average_value_size": 2282, "num_data_blocks": 837, "num_entries": 5349, "num_filter_entries": 5349, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004068, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 51, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.339489) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12339798 bytes
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.345880) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 545.4 rd, 465.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 12.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(28.7) write-amplify(13.2) OK, records in: 5865, records dropped: 516 output_compression: NoCompression
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.345893) EVENT_LOG_v1 {"time_micros": 1760004068345888, "job": 28, "event": "compaction_finished", "compaction_time_micros": 26520, "compaction_time_cpu_micros": 17825, "output_level": 6, "num_output_files": 1, "total_output_size": 12339798, "num_input_records": 5865, "num_output_records": 5349, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068346042, "job": 28, "event": "table_file_deletion", "file_number": 50}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000048.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004068347752, "job": 28, "event": "table_file_deletion", "file_number": 48}
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.312806) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347772) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347774) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347776) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347777) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:01:08.347778) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:01:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:08.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:09 compute-2 ceph-mon[5983]: pgmap v841: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2949658230' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:09.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:10.279 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:10.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:10 compute-2 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct 09 10:01:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:11 compute-2 ceph-mon[5983]: pgmap v842: 337 pgs: 337 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4140615067' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:11.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:11 compute-2 nova_compute[163961]: 2025-10-09 10:01:11.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1742209419' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:01:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2910271570' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:01:12 compute-2 nova_compute[163961]: 2025-10-09 10:01:12.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:12 compute-2 sudo[169890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:12 compute-2 sudo[169890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:12 compute-2 sudo[169890]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:12.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:12 compute-2 podman[169914]: 2025-10-09 10:01:12.631404344 +0000 UTC m=+0.060333643 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:01:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:13 compute-2 ceph-mon[5983]: pgmap v843: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:13.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:14 compute-2 ceph-mon[5983]: pgmap v844: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:14.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:15.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:16.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:16 compute-2 nova_compute[163961]: 2025-10-09 10:01:16.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:17 compute-2 ceph-mon[5983]: pgmap v845: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:01:17 compute-2 nova_compute[163961]: 2025-10-09 10:01:17.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:17.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:18.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:19 compute-2 ceph-mon[5983]: pgmap v846: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:01:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:19.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2235098991' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:20.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:21 compute-2 ceph-mon[5983]: pgmap v847: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:01:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:21.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:21 compute-2 nova_compute[163961]: 2025-10-09 10:01:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:22 compute-2 nova_compute[163961]: 2025-10-09 10:01:22.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:22.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:23 compute-2 ceph-mon[5983]: pgmap v848: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct 09 10:01:23 compute-2 podman[169949]: 2025-10-09 10:01:23.201564653 +0000 UTC m=+0.036265860 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:01:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:23.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:24.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:25 compute-2 ceph-mon[5983]: pgmap v849: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:25.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:26 compute-2 sudo[169969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:01:26 compute-2 sudo[169969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:26 compute-2 sudo[169969]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:26 compute-2 sudo[169994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:01:26 compute-2 sudo[169994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:26 compute-2 sudo[169994]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:26.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:26 compute-2 nova_compute[163961]: 2025-10-09 10:01:26.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:27 compute-2 ceph-mon[5983]: pgmap v850: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:01:27 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:01:27 compute-2 nova_compute[163961]: 2025-10-09 10:01:27.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:27.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:28 compute-2 ceph-mon[5983]: pgmap v851: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct 09 10:01:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:28.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1357456667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:29 compute-2 ovn_controller[62794]: 2025-10-09T10:01:29Z|00047|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct 09 10:01:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:29.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:30 compute-2 ceph-mon[5983]: pgmap v852: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.3 KiB/s wr, 31 op/s
Oct 09 10:01:30 compute-2 sudo[170053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:01:30 compute-2 sudo[170053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:30 compute-2 sudo[170053]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:30.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:31 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:31 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:01:31 compute-2 ceph-mon[5983]: pgmap v853: 337 pgs: 337 active+clean; 55 MiB data, 267 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 778 KiB/s wr, 44 op/s
Oct 09 10:01:31 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1905495853' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:31.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:31 compute-2 nova_compute[163961]: 2025-10-09 10:01:31.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:32 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/627844330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:32 compute-2 nova_compute[163961]: 2025-10-09 10:01:32.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:32.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:32 compute-2 sudo[170080]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:32 compute-2 sudo[170080]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:32 compute-2 sudo[170080]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:33 compute-2 ceph-mon[5983]: pgmap v854: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct 09 10:01:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:33.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:34 compute-2 podman[170106]: 2025-10-09 10:01:34.204375559 +0000 UTC m=+0.039767583 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct 09 10:01:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:34.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:35 compute-2 ceph-mon[5983]: pgmap v855: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.0 MiB/s wr, 31 op/s
Oct 09 10:01:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:35.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:36 compute-2 podman[170124]: 2025-10-09 10:01:36.206605273 +0000 UTC m=+0.042309866 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 10:01:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:36.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:36 compute-2 nova_compute[163961]: 2025-10-09 10:01:36.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:37 compute-2 nova_compute[163961]: 2025-10-09 10:01:37.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:37 compute-2 ceph-mon[5983]: pgmap v856: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 126 op/s
Oct 09 10:01:37 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3366232617' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:37.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:38.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:39 compute-2 ceph-mon[5983]: pgmap v857: 337 pgs: 337 active+clean; 67 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 111 op/s
Oct 09 10:01:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:01:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:39.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:01:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:40.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:41 compute-2 ceph-mon[5983]: pgmap v858: 337 pgs: 337 active+clean; 53 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 123 op/s
Oct 09 10:01:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:41.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:41 compute-2 nova_compute[163961]: 2025-10-09 10:01:41.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:42 compute-2 nova_compute[163961]: 2025-10-09 10:01:42.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:42.600 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:42 compute-2 nova_compute[163961]: 2025-10-09 10:01:42.740 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:43 compute-2 podman[170148]: 2025-10-09 10:01:43.222383013 +0000 UTC m=+0.057436993 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct 09 10:01:43 compute-2 ceph-mon[5983]: pgmap v859: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.1 MiB/s wr, 116 op/s
Oct 09 10:01:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:43.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:44.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:45 compute-2 ceph-mon[5983]: pgmap v860: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:45.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:46.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:46 compute-2 nova_compute[163961]: 2025-10-09 10:01:46.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:47 compute-2 nova_compute[163961]: 2025-10-09 10:01:47.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:47 compute-2 ceph-mon[5983]: pgmap v861: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 101 op/s
Oct 09 10:01:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:47.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:47.877 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:01:47 compute-2 nova_compute[163961]: 2025-10-09 10:01:47.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:47 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:47.878 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:01:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:48.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:49 compute-2 ceph-mon[5983]: pgmap v862: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 09 10:01:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:01:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:49.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:49 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:01:49.880 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:01:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:51 compute-2 ceph-mon[5983]: pgmap v863: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 938 B/s wr, 17 op/s
Oct 09 10:01:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:51.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:51 compute-2 nova_compute[163961]: 2025-10-09 10:01:51.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:52 compute-2 nova_compute[163961]: 2025-10-09 10:01:52.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:52.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:52 compute-2 sudo[170182]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:01:52 compute-2 sudo[170182]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:01:52 compute-2 sudo[170182]: pam_unix(sudo:session): session closed for user root
Oct 09 10:01:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/503740570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:53 compute-2 ceph-mon[5983]: pgmap v864: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 4.0 KiB/s rd, 597 B/s wr, 6 op/s
Oct 09 10:01:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/865184998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:53.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:54 compute-2 nova_compute[163961]: 2025-10-09 10:01:54.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:54 compute-2 podman[170208]: 2025-10-09 10:01:54.210463229 +0000 UTC m=+0.042985499 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct 09 10:01:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:54.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1811252629' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:55 compute-2 nova_compute[163961]: 2025-10-09 10:01:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:55 compute-2 ceph-mon[5983]: pgmap v865: 337 pgs: 337 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:01:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1985554974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2940217175' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3470198818' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:01:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:55.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:01:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3931842824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.528 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:56.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.706 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.707 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5039MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.708 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.708 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:01:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2633646493' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3931842824' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.753 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.753 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.765 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:01:56 compute-2 nova_compute[163961]: 2025-10-09 10:01:56.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:01:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/224353728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.105 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.108 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.126 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.127 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.128 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.420s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:01:57 compute-2 nova_compute[163961]: 2025-10-09 10:01:57.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:01:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:57 compute-2 ceph-mon[5983]: pgmap v866: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/224353728' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:01:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:01:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:57.740 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:01:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.128 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.140 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:58 compute-2 nova_compute[163961]: 2025-10-09 10:01:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:01:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:01:58.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:01:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:59 compute-2 nova_compute[163961]: 2025-10-09 10:01:59.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:01:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:01:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:01:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:01:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:01:59.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:01:59 compute-2 ceph-mon[5983]: pgmap v867: 337 pgs: 337 active+clean; 88 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct 09 10:01:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:01:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:01:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:01:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:00 compute-2 nova_compute[163961]: 2025-10-09 10:02:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:00.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:01.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:01 compute-2 ceph-mon[5983]: pgmap v868: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 376 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct 09 10:02:01 compute-2 nova_compute[163961]: 2025-10-09 10:02:01.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:02 compute-2 nova_compute[163961]: 2025-10-09 10:02:02.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:02.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:03.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:03 compute-2 ceph-mon[5983]: pgmap v869: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:04.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:05 compute-2 podman[170280]: 2025-10-09 10:02:05.207479189 +0000 UTC m=+0.038867715 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:02:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:05.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:05 compute-2 ceph-mon[5983]: pgmap v870: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:06.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:06 compute-2 nova_compute[163961]: 2025-10-09 10:02:06.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:07 compute-2 podman[170298]: 2025-10-09 10:02:07.200444468 +0000 UTC m=+0.036174288 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 09 10:02:07 compute-2 nova_compute[163961]: 2025-10-09 10:02:07.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:07.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:07 compute-2 ceph-mon[5983]: pgmap v871: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:02:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:08.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:09.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:09 compute-2 ceph-mon[5983]: pgmap v872: 337 pgs: 337 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Oct 09 10:02:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:10.280 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:10.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:11.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:11 compute-2 ceph-mon[5983]: pgmap v873: 337 pgs: 337 active+clean; 89 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 148 KiB/s wr, 93 op/s
Oct 09 10:02:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:02:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:02:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:02:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:02:11 compute-2 nova_compute[163961]: 2025-10-09 10:02:11.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:12 compute-2 nova_compute[163961]: 2025-10-09 10:02:12.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:12.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:12 compute-2 sudo[170321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:12 compute-2 sudo[170321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:12 compute-2 sudo[170321]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:02:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/4098107868' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:02:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:13.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:13 compute-2 ceph-mon[5983]: pgmap v874: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.1 MiB/s wr, 116 op/s
Oct 09 10:02:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:14 compute-2 podman[170347]: 2025-10-09 10:02:14.214075784 +0000 UTC m=+0.051134932 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:02:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:14.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:15.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:15 compute-2 ceph-mon[5983]: pgmap v875: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:16.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:16 compute-2 nova_compute[163961]: 2025-10-09 10:02:16.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:17 compute-2 nova_compute[163961]: 2025-10-09 10:02:17.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:17.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:17 compute-2 ceph-mon[5983]: pgmap v876: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:02:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:18.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:19.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:19 compute-2 ceph-mon[5983]: pgmap v877: 337 pgs: 337 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 289 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct 09 10:02:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:20.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:21.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:21 compute-2 nova_compute[163961]: 2025-10-09 10:02:21.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:21 compute-2 ceph-mon[5983]: pgmap v878: 337 pgs: 337 active+clean; 114 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 304 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct 09 10:02:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1295465010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:22 compute-2 nova_compute[163961]: 2025-10-09 10:02:22.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:22.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:23.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:23 compute-2 ceph-mon[5983]: pgmap v879: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 222 KiB/s rd, 2.0 MiB/s wr, 74 op/s
Oct 09 10:02:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:24.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:25 compute-2 podman[170382]: 2025-10-09 10:02:25.202565969 +0000 UTC m=+0.038216835 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct 09 10:02:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:25.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:25 compute-2 ceph-mon[5983]: pgmap v880: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct 09 10:02:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:26.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:26 compute-2 nova_compute[163961]: 2025-10-09 10:02:26.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:27 compute-2 nova_compute[163961]: 2025-10-09 10:02:27.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:27.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:27 compute-2 ceph-mon[5983]: pgmap v881: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 19 KiB/s wr, 30 op/s
Oct 09 10:02:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:28.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:29.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:29 compute-2 ceph-mon[5983]: pgmap v882: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:30.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:30 compute-2 sudo[170405]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:02:30 compute-2 sudo[170405]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:30 compute-2 sudo[170405]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:30 compute-2 sudo[170430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:02:30 compute-2 sudo[170430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:31 compute-2 sudo[170430]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:31.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:31 compute-2 nova_compute[163961]: 2025-10-09 10:02:31.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:31 compute-2 ceph-mon[5983]: pgmap v883: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 29 op/s
Oct 09 10:02:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:32 compute-2 nova_compute[163961]: 2025-10-09 10:02:32.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:02:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:32.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:02:32 compute-2 sudo[170486]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:32 compute-2 sudo[170486]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:32 compute-2 sudo[170486]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:33.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:33 compute-2 ceph-mon[5983]: pgmap v884: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 5.3 KiB/s rd, 6.2 KiB/s wr, 10 op/s
Oct 09 10:02:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:33 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:34.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-mon[5983]: pgmap v885: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:35.787 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:35 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4131332810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:36 compute-2 podman[170514]: 2025-10-09 10:02:36.198428412 +0000 UTC m=+0.034889762 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct 09 10:02:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:36.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:36 compute-2 nova_compute[163961]: 2025-10-09 10:02:36.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:36 compute-2 ceph-mon[5983]: pgmap v886: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 1 op/s
Oct 09 10:02:37 compute-2 nova_compute[163961]: 2025-10-09 10:02:37.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:37.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:37 compute-2 sudo[170532]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:02:37 compute-2 sudo[170532]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:37 compute-2 sudo[170532]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:37 compute-2 podman[170556]: 2025-10-09 10:02:37.914726197 +0000 UTC m=+0.036617530 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 10:02:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:38.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:38 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:02:38 compute-2 ceph-mon[5983]: pgmap v887: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:39 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2502987184' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:39.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:40.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:40 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1572329529' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:40 compute-2 ceph-mon[5983]: pgmap v888: 337 pgs: 337 active+clean; 41 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:02:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:41.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:41 compute-2 nova_compute[163961]: 2025-10-09 10:02:41.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:42 compute-2 nova_compute[163961]: 2025-10-09 10:02:42.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:42.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:43 compute-2 ceph-mon[5983]: pgmap v889: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct 09 10:02:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:43.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:44.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:45 compute-2 ceph-mon[5983]: pgmap v890: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.9 MiB/s wr, 33 op/s
Oct 09 10:02:45 compute-2 podman[170581]: 2025-10-09 10:02:45.223412135 +0000 UTC m=+0.059063250 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct 09 10:02:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:45.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:46.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:46 compute-2 nova_compute[163961]: 2025-10-09 10:02:46.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:47 compute-2 ceph-mon[5983]: pgmap v891: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:02:47 compute-2 nova_compute[163961]: 2025-10-09 10:02:47.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:47.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602145734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:48.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:49 compute-2 ceph-mon[5983]: pgmap v892: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3602145734' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:02:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:49.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:02:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:50.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:51 compute-2 ceph-mon[5983]: pgmap v893: 337 pgs: 337 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:02:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:51.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:51 compute-2 nova_compute[163961]: 2025-10-09 10:02:51.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2931780533' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1952398429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:02:52 compute-2 nova_compute[163961]: 2025-10-09 10:02:52.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:52.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:52 compute-2 sudo[170612]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:02:52 compute-2 sudo[170612]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:02:52 compute-2 sudo[170612]: pam_unix(sudo:session): session closed for user root
Oct 09 10:02:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:53 compute-2 ceph-mon[5983]: pgmap v894: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 130 op/s
Oct 09 10:02:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:53.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:54.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:55 compute-2 ceph-mon[5983]: pgmap v895: 337 pgs: 337 active+clean; 134 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 98 op/s
Oct 09 10:02:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:02:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:55.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:02:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2548672515' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1152412393' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:56.156 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:56 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:02:56.157 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.186 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.187 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.187 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:56 compute-2 podman[170640]: 2025-10-09 10:02:56.208533077 +0000 UTC m=+0.039808666 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, container_name=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct 09 10:02:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2725321656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2491592431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.530 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:56.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.718 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.719 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5032MB free_disk=59.92198944091797GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.720 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.720 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.766 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.766 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.782 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:02:56 compute-2 nova_compute[163961]: 2025-10-09 10:02:56.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:57 compute-2 ceph-mon[5983]: pgmap v896: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.9 MiB/s wr, 235 op/s
Oct 09 10:02:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/371860424' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2725321656' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2491592431' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:02:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2965295672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.123 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.127 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.139 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.140 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.141 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.421s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:02:57 compute-2 nova_compute[163961]: 2025-10-09 10:02:57.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:02:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:57.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2965295672' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:02:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:02:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:02:58.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:02:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:02:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:02:59 compute-2 ceph-mon[5983]: pgmap v897: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.142 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.142 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:02:59 compute-2 nova_compute[163961]: 2025-10-09 10:02:59.186 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:02:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:02:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:02:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:02:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:02:59.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:02:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:02:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:02:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:00 compute-2 nova_compute[163961]: 2025-10-09 10:03:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:00.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:01 compute-2 ceph-mon[5983]: pgmap v898: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:03:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:01.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:01 compute-2 nova_compute[163961]: 2025-10-09 10:03:01.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:02 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:02.159 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:03:02 compute-2 nova_compute[163961]: 2025-10-09 10:03:02.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:02 compute-2 nova_compute[163961]: 2025-10-09 10:03:02.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:02.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:03 compute-2 ceph-mon[5983]: pgmap v899: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct 09 10:03:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:03.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:04 compute-2 nova_compute[163961]: 2025-10-09 10:03:04.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:04.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:05 compute-2 ceph-mon[5983]: pgmap v900: 337 pgs: 337 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct 09 10:03:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:05.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:06.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:06 compute-2 nova_compute[163961]: 2025-10-09 10:03:06.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:07 compute-2 ceph-mon[5983]: pgmap v901: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 199 op/s
Oct 09 10:03:07 compute-2 podman[170714]: 2025-10-09 10:03:07.203298594 +0000 UTC m=+0.038252593 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct 09 10:03:07 compute-2 nova_compute[163961]: 2025-10-09 10:03:07.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:07.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:08 compute-2 podman[170731]: 2025-10-09 10:03:08.212350655 +0000 UTC m=+0.048850043 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct 09 10:03:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:08.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:09 compute-2 ceph-mon[5983]: pgmap v902: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:09.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:03:10.281 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:10.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:11 compute-2 ceph-mon[5983]: pgmap v903: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:11.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:11 compute-2 nova_compute[163961]: 2025-10-09 10:03:11.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:03:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3114795455' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:03:12 compute-2 nova_compute[163961]: 2025-10-09 10:03:12.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:12.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:12 compute-2 sudo[170753]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:12 compute-2 sudo[170753]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:12 compute-2 sudo[170753]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:13 compute-2 ceph-mon[5983]: pgmap v904: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.2 MiB/s wr, 63 op/s
Oct 09 10:03:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:13.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:14.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:15 compute-2 ceph-mon[5983]: pgmap v905: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 271 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct 09 10:03:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:15.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:16 compute-2 podman[170781]: 2025-10-09 10:03:16.228339284 +0000 UTC m=+0.060836883 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct 09 10:03:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:16.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:16 compute-2 nova_compute[163961]: 2025-10-09 10:03:16.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:17 compute-2 ceph-mon[5983]: pgmap v906: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 272 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct 09 10:03:17 compute-2 nova_compute[163961]: 2025-10-09 10:03:17.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:17.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:18.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:19 compute-2 ceph-mon[5983]: pgmap v907: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 10:03:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:19.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:20.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:21 compute-2 ceph-mon[5983]: pgmap v908: 337 pgs: 337 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 12 KiB/s wr, 1 op/s
Oct 09 10:03:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2378775170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:21 compute-2 nova_compute[163961]: 2025-10-09 10:03:21.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:21.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:22 compute-2 nova_compute[163961]: 2025-10-09 10:03:22.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:22.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:23 compute-2 ceph-mon[5983]: pgmap v909: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 20 KiB/s wr, 30 op/s
Oct 09 10:03:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:23.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:24.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:25 compute-2 ceph-mon[5983]: pgmap v910: 337 pgs: 337 active+clean; 121 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 9.2 KiB/s wr, 29 op/s
Oct 09 10:03:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:25.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2459753808' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:26.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:26 compute-2 nova_compute[163961]: 2025-10-09 10:03:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:27 compute-2 ceph-mon[5983]: pgmap v911: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 13 KiB/s wr, 58 op/s
Oct 09 10:03:27 compute-2 podman[170815]: 2025-10-09 10:03:27.226529639 +0000 UTC m=+0.052968357 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.license=GPLv2, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true)
Oct 09 10:03:27 compute-2 nova_compute[163961]: 2025-10-09 10:03:27.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:27.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:28.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:29 compute-2 ceph-mon[5983]: pgmap v912: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 09 10:03:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:29.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:30.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:31 compute-2 ceph-mon[5983]: pgmap v913: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 57 op/s
Oct 09 10:03:31 compute-2 nova_compute[163961]: 2025-10-09 10:03:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:31.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:32 compute-2 nova_compute[163961]: 2025-10-09 10:03:32.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:32.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:33 compute-2 sudo[170838]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:33 compute-2 sudo[170838]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:33 compute-2 sudo[170838]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:33 compute-2 ceph-mon[5983]: pgmap v914: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 12 KiB/s wr, 58 op/s
Oct 09 10:03:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:33.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:34.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:35 compute-2 ceph-mon[5983]: pgmap v915: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct 09 10:03:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:35.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:36.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:36 compute-2 nova_compute[163961]: 2025-10-09 10:03:36.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:37 compute-2 ceph-mon[5983]: pgmap v916: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 3.5 KiB/s wr, 29 op/s
Oct 09 10:03:37 compute-2 nova_compute[163961]: 2025-10-09 10:03:37.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:37.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:37 compute-2 sudo[170868]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:03:37 compute-2 sudo[170868]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:38 compute-2 sudo[170868]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:38 compute-2 sudo[170894]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:03:38 compute-2 sudo[170894]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:38 compute-2 podman[170892]: 2025-10-09 10:03:38.054966348 +0000 UTC m=+0.042511292 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct 09 10:03:38 compute-2 sudo[170894]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:38.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:39 compute-2 podman[170965]: 2025-10-09 10:03:39.246597094 +0000 UTC m=+0.081570354 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct 09 10:03:39 compute-2 ceph-mon[5983]: pgmap v917: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:03:39 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:03:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:39.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:40 compute-2 ceph-mon[5983]: pgmap v918: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:03:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:40.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:41 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4048734863' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:41 compute-2 nova_compute[163961]: 2025-10-09 10:03:41.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:41.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:42 compute-2 ceph-mon[5983]: pgmap v919: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:03:42 compute-2 nova_compute[163961]: 2025-10-09 10:03:42.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:42 compute-2 sudo[170986]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:03:42 compute-2 sudo[170986]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:42 compute-2 sudo[170986]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:42.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:03:43 compute-2 ceph-mon[5983]: pgmap v920: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:03:43 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4018809995' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:43.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:44 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4110228176' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct 09 10:03:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:44.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:45 compute-2 ceph-mon[5983]: pgmap v921: 337 pgs: 337 active+clean; 41 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:03:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:45.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:46.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:46 compute-2 nova_compute[163961]: 2025-10-09 10:03:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:47 compute-2 podman[171015]: 2025-10-09 10:03:47.243667212 +0000 UTC m=+0.068110247 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 10:03:47 compute-2 nova_compute[163961]: 2025-10-09 10:03:47.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:47 compute-2 ceph-mon[5983]: pgmap v922: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct 09 10:03:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:47.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:48.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:49 compute-2 ceph-mon[5983]: pgmap v923: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.0 MiB/s wr, 104 op/s
Oct 09 10:03:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:03:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:49.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:50.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:51 compute-2 ceph-mon[5983]: pgmap v924: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 91 op/s
Oct 09 10:03:51 compute-2 nova_compute[163961]: 2025-10-09 10:03:51.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:51.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:52 compute-2 nova_compute[163961]: 2025-10-09 10:03:52.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:03:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:52.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:53 compute-2 sudo[171045]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:03:53 compute-2 sudo[171045]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:03:53 compute-2 sudo[171045]: pam_unix(sudo:session): session closed for user root
Oct 09 10:03:53 compute-2 ceph-mon[5983]: pgmap v925: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Oct 09 10:03:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:53.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:54 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 09 10:03:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:54.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:55 compute-2 ceph-mon[5983]: pgmap v926: 337 pgs: 337 active+clean; 88 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct 09 10:03:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4083760399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:03:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:55.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:03:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.188 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/604600001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.534 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/139525667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/604600001' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/325562166' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:56.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.748 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=5012MB free_disk=59.96738052368164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.749 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.790 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.791 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.802 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:03:56 compute-2 nova_compute[163961]: 2025-10-09 10:03:56.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:03:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:03:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:03:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2413348394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.161 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.358s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.164 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.175 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.176 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.176 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:03:57 compute-2 nova_compute[163961]: 2025-10-09 10:03:57.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:03:57 compute-2 ceph-mon[5983]: pgmap v927: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct 09 10:03:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2413348394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/526253334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:03:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:57.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:58 compute-2 podman[171119]: 2025-10-09 10:03:58.218725651 +0000 UTC m=+0.048903464 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct 09 10:03:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:03:58.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.177 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.187 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.188 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-2 nova_compute[163961]: 2025-10-09 10:03:59.188 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:03:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:03:59 compute-2 ceph-mon[5983]: pgmap v928: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 09 10:03:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:03:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:03:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:03:59.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:03:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:03:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:03:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:03:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:00 compute-2 nova_compute[163961]: 2025-10-09 10:04:00.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:04:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:00.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:01 compute-2 ceph-mon[5983]: pgmap v929: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 74 op/s
Oct 09 10:04:01 compute-2 nova_compute[163961]: 2025-10-09 10:04:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:01.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:02 compute-2 nova_compute[163961]: 2025-10-09 10:04:02.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:02 compute-2 nova_compute[163961]: 2025-10-09 10:04:02.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:02.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:03 compute-2 ceph-mon[5983]: pgmap v930: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 619 KiB/s rd, 2.1 MiB/s wr, 75 op/s
Oct 09 10:04:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:03.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:04 compute-2 nova_compute[163961]: 2025-10-09 10:04:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:04.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:05 compute-2 nova_compute[163961]: 2025-10-09 10:04:05.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:05 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:05.002 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:04:05 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:05.002 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:04:05 compute-2 ceph-mon[5983]: pgmap v931: 337 pgs: 337 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct 09 10:04:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:05.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:06.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:06 compute-2 nova_compute[163961]: 2025-10-09 10:04:06.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:07 compute-2 nova_compute[163961]: 2025-10-09 10:04:07.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:07 compute-2 ceph-mon[5983]: pgmap v932: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct 09 10:04:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2428919325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:07.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:08 compute-2 podman[171147]: 2025-10-09 10:04:08.204371822 +0000 UTC m=+0.039237289 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:04:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:08.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:09 compute-2 ceph-mon[5983]: pgmap v933: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:09.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:10 compute-2 podman[171166]: 2025-10-09 10:04:10.202411815 +0000 UTC m=+0.036284672 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd)
Oct 09 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.282 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.282 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:10.283 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:10.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:11 compute-2 ceph-mon[5983]: pgmap v934: 337 pgs: 337 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Oct 09 10:04:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:04:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Oct 09 10:04:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:04:11 compute-2 nova_compute[163961]: 2025-10-09 10:04:11.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:11.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:12 compute-2 nova_compute[163961]: 2025-10-09 10:04:12.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:04:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/550343667' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:04:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:12.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:13 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:04:13.004 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:04:13 compute-2 sudo[171186]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:13 compute-2 sudo[171186]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:13 compute-2 sudo[171186]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:13 compute-2 ceph-mon[5983]: pgmap v935: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 13 KiB/s wr, 29 op/s
Oct 09 10:04:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:13.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:14.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:15 compute-2 ceph-mon[5983]: pgmap v936: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct 09 10:04:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:15.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:16.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:16 compute-2 nova_compute[163961]: 2025-10-09 10:04:16.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:17 compute-2 nova_compute[163961]: 2025-10-09 10:04:17.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:17 compute-2 ceph-mon[5983]: pgmap v937: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 29 op/s
Oct 09 10:04:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:17.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:18 compute-2 podman[171216]: 2025-10-09 10:04:18.220600961 +0000 UTC m=+0.055982621 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:04:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:18.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:19 compute-2 ceph-mon[5983]: pgmap v938: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:19.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:20.758 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:21 compute-2 ceph-mon[5983]: pgmap v939: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:21 compute-2 nova_compute[163961]: 2025-10-09 10:04:21.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:21.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:22 compute-2 nova_compute[163961]: 2025-10-09 10:04:22.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:22.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:23 compute-2 ceph-mon[5983]: pgmap v940: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:23.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:24.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:25 compute-2 ceph-mon[5983]: pgmap v941: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:25.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:26.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:26 compute-2 nova_compute[163961]: 2025-10-09 10:04:26.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:27 compute-2 nova_compute[163961]: 2025-10-09 10:04:27.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:27 compute-2 ceph-mon[5983]: pgmap v942: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:27.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:28.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:29 compute-2 podman[171250]: 2025-10-09 10:04:29.220178081 +0000 UTC m=+0.050791785 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct 09 10:04:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:29 compute-2 ceph-mon[5983]: pgmap v943: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:29.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:30.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:31 compute-2 ceph-mon[5983]: pgmap v944: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:31 compute-2 nova_compute[163961]: 2025-10-09 10:04:31.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:31.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:32 compute-2 nova_compute[163961]: 2025-10-09 10:04:32.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:32.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:33 compute-2 sudo[171271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:33 compute-2 sudo[171271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:33 compute-2 sudo[171271]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:33 compute-2 ceph-mon[5983]: pgmap v945: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:33.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:34.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:35 compute-2 ceph-mon[5983]: pgmap v946: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #52. Immutable memtables: 0.
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.845092) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 52
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275845110, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2360, "num_deletes": 251, "total_data_size": 6086491, "memory_usage": 6183008, "flush_reason": "Manual Compaction"}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #53: started
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275853889, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 53, "file_size": 3946077, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26060, "largest_seqno": 28415, "table_properties": {"data_size": 3936844, "index_size": 5727, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19559, "raw_average_key_size": 20, "raw_value_size": 3918125, "raw_average_value_size": 4051, "num_data_blocks": 252, "num_entries": 967, "num_filter_entries": 967, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004069, "oldest_key_time": 1760004069, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 8827 microseconds, and 5918 cpu microseconds.
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853917) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #53: 3946077 bytes OK
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.853932) [db/memtable_list.cc:519] [default] Level-0 commit table #53 started
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854301) [db/memtable_list.cc:722] [default] Level-0 commit table #53: memtable #1 done
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854313) EVENT_LOG_v1 {"time_micros": 1760004275854309, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.854325) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 6076116, prev total WAL file size 6076116, number of live WAL files 2.
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000049.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855163) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [53(3853KB)], [51(11MB)]
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275855192, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [53], "files_L6": [51], "score": -1, "input_data_size": 16285875, "oldest_snapshot_seqno": -1}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #54: 5800 keys, 14126839 bytes, temperature: kUnknown
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275894185, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 54, "file_size": 14126839, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14087618, "index_size": 23623, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14533, "raw_key_size": 147428, "raw_average_key_size": 25, "raw_value_size": 13982218, "raw_average_value_size": 2410, "num_data_blocks": 962, "num_entries": 5800, "num_filter_entries": 5800, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004275, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 54, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.894507) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14126839 bytes
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.895292) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 416.0 rd, 360.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 11.8 +0.0 blob) out(13.5 +0.0 blob), read-write-amplify(7.7) write-amplify(3.6) OK, records in: 6316, records dropped: 516 output_compression: NoCompression
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.895312) EVENT_LOG_v1 {"time_micros": 1760004275895301, "job": 30, "event": "compaction_finished", "compaction_time_micros": 39153, "compaction_time_cpu_micros": 20367, "output_level": 6, "num_output_files": 1, "total_output_size": 14126839, "num_input_records": 6316, "num_output_records": 5800, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275896520, "job": 30, "event": "table_file_deletion", "file_number": 53}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000051.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004275898425, "job": 30, "event": "table_file_deletion", "file_number": 51}
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.855123) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898553) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898557) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898559) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:04:35.898562) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:04:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:35.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:36.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:36 compute-2 nova_compute[163961]: 2025-10-09 10:04:36.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:37 compute-2 nova_compute[163961]: 2025-10-09 10:04:37.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:37 compute-2 ceph-mon[5983]: pgmap v947: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:37.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:38.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:39 compute-2 podman[171302]: 2025-10-09 10:04:39.206390727 +0000 UTC m=+0.038663903 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct 09 10:04:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:39 compute-2 ceph-mon[5983]: pgmap v948: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:39.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:40.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:41 compute-2 podman[171321]: 2025-10-09 10:04:41.205476816 +0000 UTC m=+0.041339126 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct 09 10:04:41 compute-2 ceph-mon[5983]: pgmap v949: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:41 compute-2 nova_compute[163961]: 2025-10-09 10:04:41.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:41.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:42 compute-2 nova_compute[163961]: 2025-10-09 10:04:42.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:42 compute-2 sudo[171340]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:04:42 compute-2 sudo[171340]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:42 compute-2 sudo[171340]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:42 compute-2 sudo[171365]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:04:42 compute-2 sudo[171365]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:42.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:43 compute-2 sudo[171365]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:43 compute-2 ceph-mon[5983]: pgmap v950: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:04:43 compute-2 ceph-mon[5983]: pgmap v951: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:04:43 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:04:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:43.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:44.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:45.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:46 compute-2 ceph-mon[5983]: pgmap v952: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:46.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:46 compute-2 nova_compute[163961]: 2025-10-09 10:04:46.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:46 compute-2 sudo[171423]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:04:46 compute-2 sudo[171423]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:46 compute-2 sudo[171423]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:47 compute-2 nova_compute[163961]: 2025-10-09 10:04:47.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:47 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:04:47 compute-2 ceph-mon[5983]: pgmap v953: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:47.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:48.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:49 compute-2 podman[171450]: 2025-10-09 10:04:49.22434645 +0000 UTC m=+0.058230454 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true)
Oct 09 10:04:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:49.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:50 compute-2 ceph-mon[5983]: pgmap v954: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:04:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:04:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:50.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:04:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:51 compute-2 nova_compute[163961]: 2025-10-09 10:04:51.883 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:51.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:52 compute-2 ceph-mon[5983]: pgmap v955: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:04:52 compute-2 nova_compute[163961]: 2025-10-09 10:04:52.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:52.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:53 compute-2 sudo[171477]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:04:53 compute-2 sudo[171477]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:04:53 compute-2 sudo[171477]: pam_unix(sudo:session): session closed for user root
Oct 09 10:04:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:53.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:54 compute-2 sshd-session[171503]: Accepted publickey for zuul from 192.168.122.10 port 53886 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:04:54 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 10:04:54 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 10:04:54 compute-2 systemd-logind[800]: New session 40 of user zuul.
Oct 09 10:04:54 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 10:04:54 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct 09 10:04:54 compute-2 systemd[171508]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:04:54 compute-2 systemd[171508]: Queued start job for default target Main User Target.
Oct 09 10:04:54 compute-2 systemd[171508]: Created slice User Application Slice.
Oct 09 10:04:54 compute-2 systemd[171508]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:04:54 compute-2 systemd[171508]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:04:54 compute-2 systemd[171508]: Reached target Paths.
Oct 09 10:04:54 compute-2 systemd[171508]: Reached target Timers.
Oct 09 10:04:54 compute-2 systemd[171508]: Starting D-Bus User Message Bus Socket...
Oct 09 10:04:54 compute-2 systemd[171508]: Starting Create User's Volatile Files and Directories...
Oct 09 10:04:54 compute-2 ceph-mon[5983]: pgmap v956: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:04:54 compute-2 systemd[171508]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:04:54 compute-2 systemd[171508]: Finished Create User's Volatile Files and Directories.
Oct 09 10:04:54 compute-2 systemd[171508]: Reached target Sockets.
Oct 09 10:04:54 compute-2 systemd[171508]: Reached target Basic System.
Oct 09 10:04:54 compute-2 systemd[171508]: Reached target Main User Target.
Oct 09 10:04:54 compute-2 systemd[171508]: Startup finished in 98ms.
Oct 09 10:04:54 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct 09 10:04:54 compute-2 systemd[1]: Started Session 40 of User zuul.
Oct 09 10:04:54 compute-2 sshd-session[171503]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:04:54 compute-2 sudo[171524]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 10:04:54 compute-2 sudo[171524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:04:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:04:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:54.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:04:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:55.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:56 compute-2 ceph-mon[5983]: pgmap v957: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:04:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:56.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:56 compute-2 nova_compute[163961]: 2025-10-09 10:04:56.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:04:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:04:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:04:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/968507340' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.26359 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4131648813' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/336862096' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/968507340' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/204166457' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/33619624' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3305344509' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/551610347' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:04:57 compute-2 nova_compute[163961]: 2025-10-09 10:04:57.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:57.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.191 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.191 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:04:58 compute-2 ceph-mon[5983]: from='client.26645 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-2 ceph-mon[5983]: from='client.16800 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-2 ceph-mon[5983]: from='client.26386 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-2 ceph-mon[5983]: from='client.26663 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-2 ceph-mon[5983]: from='client.16827 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:04:58 compute-2 ceph-mon[5983]: pgmap v958: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:04:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:04:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3527318548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.538 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.347s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.745 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4938MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.746 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:04:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:04:58.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.802 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.802 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.829 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.849 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.849 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.864 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.893 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 10:04:58 compute-2 nova_compute[163961]: 2025-10-09 10:04:58.910 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:04:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:04:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3280518930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.250 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.254 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.269 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.270 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:04:59 compute-2 nova_compute[163961]: 2025-10-09 10:04:59.270 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.524s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:04:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3527318548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3280518930' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:04:59 compute-2 podman[171840]: 2025-10-09 10:04:59.339744153 +0000 UTC m=+0.050071924 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2)
Oct 09 10:04:59 compute-2 ovs-vsctl[171866]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 10:04:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:04:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:04:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:04:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:04:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:04:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:04:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:04:59.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 10:05:00 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.271 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:00 compute-2 nova_compute[163961]: 2025-10-09 10:05:00.302 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:05:00 compute-2 ceph-mon[5983]: pgmap v959: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:00 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: cache status {prefix=cache status} (starting...)
Oct 09 10:05:00 compute-2 lvm[172157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:05:00 compute-2 lvm[172157]: VG ceph_vg0 finished
Oct 09 10:05:00 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: client ls {prefix=client ls} (starting...)
Oct 09 10:05:00 compute-2 kernel: block loop3: the capability attribute has been deprecated.
Oct 09 10:05:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:00 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:05:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:00.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:00 compute-2 rsyslogd[1245]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct 09 10:05:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: damage ls {prefix=damage ls} (starting...)
Oct 09 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump loads {prefix=dump loads} (starting...)
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 09 10:05:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2670855265' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 09 10:05:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3065331311' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 09 10:05:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 09 10:05:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3912654270' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:01 compute-2 nova_compute[163961]: 2025-10-09 10:05:01.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:01 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 09 10:05:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:01.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:02 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 09 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: ops {prefix=ops} (starting...)
Oct 09 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 09 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/900548494' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.26440 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.26705 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: pgmap v960: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3065331311' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1435948674' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1761942395' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3912654270' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3126869266' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3069884087' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1937344101' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/900548494' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 09 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3249911151' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 09 10:05:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2389407953' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:02 compute-2 nova_compute[163961]: 2025-10-09 10:05:02.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: session ls {prefix=session ls} (starting...)
Oct 09 10:05:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:02.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:02 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: status {prefix=status} (starting...)
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1391361791' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 09 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1337245173' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26461 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26717 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26479 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26744 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.16899 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26503 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26765 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.26509 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3249911151' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2389407953' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/108986200' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3242075144' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2685102981' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3857343826' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3551404967' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1391361791' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1337245173' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1643024054' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/845671599' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Oct 09 10:05:03 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2101833220' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:03.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:04 compute-2 nova_compute[163961]: 2025-10-09 10:05:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:04 compute-2 nova_compute[163961]: 2025-10-09 10:05:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.26548 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.26810 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.16971 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.16974 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.26831 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: pgmap v961: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.17007 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/406982026' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2101833220' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2635017389' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2541611345' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/802223364' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1522824198' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1062635565' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1096112260' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3081418931' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1240712361' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3984711169' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4057339861' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/668757568' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:04.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:05:05 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3772235470' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.17034 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.26644 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.26909 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/461285032' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/64288757' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3331225401' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3256037916' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3984257628' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2889519608' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/604418692' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3026430711' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3772235470' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3683240453' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2521577722' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4002817725' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:05:05 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1482807627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:28.183473+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68591616 unmapped: 573440 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 74 handle_osd_map epochs [75,75], i have 75, src has [1,75]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.850441 1 0.000094
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003642 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.004743 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.005029 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.942181 1 0.000090
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004360 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.007383 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001338005s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.714279175s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.007491 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[61,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] exit Reset 0.000090 1 0.000136
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.001278877s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.714279175s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000436783s) [0] async=[0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713455200s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] exit Reset 0.000162 1 0.000229
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] exit Start 0.000013 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75 pruub=15.000324249s) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713455200s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.903509 1 0.000266
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004309 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.006738 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.006772 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000190735s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.713485718s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] exit Reset 0.000097 1 0.000167
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] exit Start 0.000040 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=15.000123978s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.713485718s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.981936 1 0.000073
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.005744 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.009033 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.009317 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=73) [0]/[2] async=[0] r=0 lpr=73 pi=[60,73)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995488167s) [0] async=[0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 40'1059 active pruub 157.709838867s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] exit Reset 0.000211 1 0.000679
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] exit Start 0.000231 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 75 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75 pruub=14.995354652s) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.709838867s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 67)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:28.173010+0000 osd.2 (osd.2) 66 : cluster [DBG] 5.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:28.183473+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:37:59.667273+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:29.199669+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.12 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:29.210181+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.12 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68599808 unmapped: 565248 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 69)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:29.199669+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.12 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:29.210181+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.12 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:00.667458+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:30.241340+0000 osd.2 (osd.2) 70 : cluster [DBG] 5.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:30.251918+0000 osd.2 (osd.2) 71 : cluster [DBG] 5.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.557827 6 0.000151
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.558294 6 0.000097
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.556706 6 0.000712
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.558587 6 0.000627
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.686045 54 0.000267
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.697591 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 23.675503 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 23.675543 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=57) [2] r=0 lpr=57 crt=41'42 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311874390s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 active pruub 153.583740234s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] exit Reset 0.000058 1 0.000105
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76 pruub=9.311841011s) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 153.583740234s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.845861 42 0.000512
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.848990 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 19.851812 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 19.851854 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.154195786s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.426406860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.847647 42 0.000395
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] exit Reset 0.000275 1 0.000411
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.849865 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 19.851681 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 19.851746 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] exit Start 0.000254 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.153966904s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.426406860s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.152080536s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 active pruub 157.424911499s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001640 2 0.000063
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] exit Reset 0.000225 1 0.000588
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] exit Start 0.000008 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76 pruub=13.151949883s) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424911499s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002328 2 0.000068
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002202 2 0.000069
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002296 2 0.000062
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.061447 2 0.000456
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.063121 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.7( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.621055 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68681728 unmapped: 483328 heap: 69165056 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112869 2 0.000532
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.115282 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=6 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.673627 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.134904 2 0.000355
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.137171 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 76 pg[10.17( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/60 les/c/f=74/61/0 sis=75) [0] r=-1 lpr=75 pi=[60,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.694222 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.154018 3 0.000357
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.154320 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000056 1 0.000078
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.154429 3 0.000043
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.154480 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=76) [0] r=-1 lpr=76 pi=[61,76)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000203 1 0.000255
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.155907 7 0.000065
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000200 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001603 2 0.000036
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000026 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000638 1 0.000042
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 77 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000395 2 0.000609
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000011 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000002 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 DELETING pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.171546 5 0.000137
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.173987 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=73/74 n=5 ec=53/34 lis/c=73/61 les/c/f=74/62/0 sis=75) [0] r=-1 lpr=75 pi=[61,75)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.732623 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 DELETING pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.024252 1 0.000136
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.024988 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 77 pg[6.9( v 41'42 (0'0,41'42] lb MIN local-lis/les=57/58 n=1 ec=49/14 lis/c=57/57 les/c/f=58/58/0 sis=76) [1] r=-1 lpr=76 pi=[57,76)/1 crt=41'42 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.180932 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 71)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:30.241340+0000 osd.2 (osd.2) 70 : cluster [DBG] 5.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:30.251918+0000 osd.2 (osd.2) 71 : cluster [DBG] 5.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:01.667619+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:31.263894+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:31.274291+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68730880 unmapped: 1482752 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 638583 data_alloc: 218103808 data_used: 4096
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 77 handle_osd_map epochs [77,78], i have 77, src has [1,78]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.882714272s of 10.004929543s, submitted: 119
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001099 3 0.000083
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002799 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001411 3 0.000045
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.001943 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.002807 5 0.000274
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000485 1 0.000080
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002964 5 0.000670
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000397 1 0.000023
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.133021 2 0.000066
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.133333 1 0.000036
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000564 1 0.000141
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038030 2 0.000058
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 78 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 73)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:31.263894+0000 osd.2 (osd.2) 72 : cluster [DBG] 7.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:31.274291+0000 osd.2 (osd.2) 73 : cluster [DBG] 7.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:02.667780+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 heartbeat osd_stat(store_statfs(0x4fcf38000/0x0/0x4ffc00000, data 0x82dee/0xf2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68755456 unmapped: 1458176 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.864070 1 0.000301
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001121 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.003948 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.003982 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001540184s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432449341s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] exit Reset 0.000272 1 0.000353
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] exit Start 0.000119 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001320839s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432449341s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.825928 1 0.000102
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.001542 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.003736 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.004041 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=77) [0]/[2] async=[0] r=0 lpr=77 pi=[61,77)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.001205444s) [0] async=[0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 40'1059 active pruub 161.432983398s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] exit Reset 0.000551 1 0.000986
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] exit Start 0.000370 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 79 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79 pruub=15.000682831s) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 161.432983398s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:03.667901+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:33.260791+0000 osd.2 (osd.2) 74 : cluster [DBG] 10.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:33.289003+0000 osd.2 (osd.2) 75 : cluster [DBG] 10.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68796416 unmapped: 1417216 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.221642 6 0.000514
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.223026 6 0.000474
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001592 2 0.000056
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001551 2 0.000032
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 DELETING pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.071402 2 0.000291
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.073054 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.9( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=6 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.295164 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 75)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:33.260791+0000 osd.2 (osd.2) 74 : cluster [DBG] 10.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:33.289003+0000 osd.2 (osd.2) 75 : cluster [DBG] 10.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 DELETING pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.123095 2 0.000252
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.124757 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 80 pg[10.19( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=77/78 n=5 ec=53/34 lis/c=77/61 les/c/f=78/62/0 sis=79) [0] r=-1 lpr=79 pi=[61,79)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.347976 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:04.668125+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:34.291361+0000 osd.2 (osd.2) 76 : cluster [DBG] 10.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:34.423284+0000 osd.2 (osd.2) 77 : cluster [DBG] 10.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68804608 unmapped: 1409024 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 24.168831 61 0.000190
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 24.174819 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 25.016765 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 25.016964 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=60) [2] r=0 lpr=60 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830674171s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 active pruub 164.427581787s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.171619 57 0.000427
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.174710 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 24.175102 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 24.176533 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827572823s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 active pruub 157.424896240s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] exit Reset 0.000396 1 0.000701
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] exit Start 0.000058 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] exit Reset 0.000110 1 0.000454
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81 pruub=15.830345154s) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 164.427581787s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] exit Start 0.000068 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 81 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81 pruub=8.827487946s) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 157.424896240s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 77)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:34.291361+0000 osd.2 (osd.2) 76 : cluster [DBG] 10.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:34.423284+0000 osd.2 (osd.2) 77 : cluster [DBG] 10.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:05.668470+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:35.258715+0000 osd.2 (osd.2) 78 : cluster [DBG] 10.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:35.297568+0000 osd.2 (osd.2) 79 : cluster [DBG] 10.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68812800 unmapped: 1400832 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.827971 3 0.000367
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.828133 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=81) [0] r=-1 lpr=81 pi=[60,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.827930 3 0.000298
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.828167 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=81) [0] r=-1 lpr=81 pi=[61,81)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000084 1 0.000125
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000383 1 0.000467
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000092 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002518 2 0.000236
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000065 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003830 2 0.000041
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000072 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 82 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 79)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:35.258715+0000 osd.2 (osd.2) 78 : cluster [DBG] 10.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:35.297568+0000 osd.2 (osd.2) 79 : cluster [DBG] 10.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:06.669426+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:36.245660+0000 osd.2 (osd.2) 80 : cluster [DBG] 6.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:36.256069+0000 osd.2 (osd.2) 81 : cluster [DBG] 6.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 1343488 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637492 data_alloc: 218103808 data_used: 4096
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 82 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003182 3 0.000179
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007196 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004316 3 0.000177
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007016 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=60/61 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002040 5 0.000678
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000079 1 0.000061
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=60/60 les/c/f=61/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.001232 5 0.001238
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.064307 1 0.000072
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.014249 2 0.000086
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.078522 1 0.000027
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.029252 1 0.000128
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028369 2 0.000086
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 83 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 81)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:36.245660+0000 osd.2 (osd.2) 80 : cluster [DBG] 6.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:36.256069+0000 osd.2 (osd.2) 81 : cluster [DBG] 6.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:07.670334+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:37.218056+0000 osd.2 (osd.2) 82 : cluster [DBG] 11.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:37.228495+0000 osd.2 (osd.2) 83 : cluster [DBG] 11.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1277952 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.874780 1 0.000143
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013432 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020504 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020651 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[60,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.932686 1 0.000094
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.013689 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.020915 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.020944 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=82) [0]/[2] async=[0] r=0 lpr=82 pi=[61,82)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988700867s) [0] async=[0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.435379028s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.988006592s) [0] async=[0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 40'1059 active pruub 166.434722900s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] exit Reset 0.000231 1 0.000335
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] exit Start 0.000042 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84 pruub=14.988534927s) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.435379028s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] exit Reset 0.000620 1 0.000678
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] exit Start 0.000148 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 84 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84 pruub=14.987423897s) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.434722900s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 heartbeat osd_stat(store_statfs(0x4fcf2d000/0x0/0x4ffc00000, data 0x8d206/0xfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 83)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:37.218056+0000 osd.2 (osd.2) 82 : cluster [DBG] 11.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:37.228495+0000 osd.2 (osd.2) 83 : cluster [DBG] 11.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:08.670525+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:38.207306+0000 osd.2 (osd.2) 84 : cluster [DBG] 12.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:38.218040+0000 osd.2 (osd.2) 85 : cluster [DBG] 12.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 84 handle_osd_map epochs [84,85], i have 84, src has [1,85]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011785 7 0.000166
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000075 1 0.000129
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.011416 7 0.000313
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000076
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038588 2 0.000393
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038760 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=6 ec=53/34 lis/c=82/60 les/c/f=83/61/0 sis=84) [0] r=-1 lpr=84 pi=[60,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.050696 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 DELETING pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.053156 2 0.000230
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.053293 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 85 pg[10.1b( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=82/83 n=5 ec=53/34 lis/c=82/61 les/c/f=83/62/0 sis=84) [0] r=-1 lpr=84 pi=[61,84)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.064961 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 85)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:38.207306+0000 osd.2 (osd.2) 84 : cluster [DBG] 12.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:38.218040+0000 osd.2 (osd.2) 85 : cluster [DBG] 12.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:09.670700+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:39.233103+0000 osd.2 (osd.2) 86 : cluster [DBG] 11.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:39.243687+0000 osd.2 (osd.2) 87 : cluster [DBG] 11.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 87)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:39.233103+0000 osd.2 (osd.2) 86 : cluster [DBG] 11.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:39.243687+0000 osd.2 (osd.2) 87 : cluster [DBG] 11.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:10.670898+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:40.226998+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:40.237602+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68968448 unmapped: 1245184 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 85 heartbeat osd_stat(store_statfs(0x4fcf27000/0x0/0x4ffc00000, data 0x910ca/0x102000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2bcf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 89)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:40.226998+0000 osd.2 (osd.2) 88 : cluster [DBG] 4.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:40.237602+0000 osd.2 (osd.2) 89 : cluster [DBG] 4.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:11.671095+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:41.246282+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:41.256609+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 637173 data_alloc: 218103808 data_used: 8192
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.072108269s of 10.152162552s, submitted: 64
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 91)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:41.246282+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:41.256609+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:12.671323+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:42.219879+0000 osd.2 (osd.2) 92 : cluster [DBG] 4.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:42.230424+0000 osd.2 (osd.2) 93 : cluster [DBG] 4.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 93)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:42.219879+0000 osd.2 (osd.2) 92 : cluster [DBG] 4.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:42.230424+0000 osd.2 (osd.2) 93 : cluster [DBG] 4.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:13.671512+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:43.203134+0000 osd.2 (osd.2) 94 : cluster [DBG] 8.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:43.213710+0000 osd.2 (osd.2) 95 : cluster [DBG] 8.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68870144 unmapped: 1343488 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 85 handle_osd_map epochs [86,87], i have 85, src has [1,87]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 22.378818 51 0.000244
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 22.382150 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 23.383667 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 23.383914 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621111870s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 active pruub 167.427932739s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] exit Reset 0.000089 2 0.000149
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] exit Start 0.000007 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86 pruub=9.621063232s) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 167.427932739s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 23.383866 54 0.000305
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 23.386753 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 24.141847 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 24.141882 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 86 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616195679s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 active pruub 166.424423218s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] exit Reset 0.000079 2 0.000135
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 87 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86 pruub=8.616158485s) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 166.424423218s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 95)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:43.203134+0000 osd.2 (osd.2) 94 : cluster [DBG] 8.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:43.213710+0000 osd.2 (osd.2) 95 : cluster [DBG] 8.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 87 handle_osd_map epochs [86,87], i have 87, src has [1,87]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:14.671712+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:44.180815+0000 osd.2 (osd.2) 96 : cluster [DBG] 4.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:44.191377+0000 osd.2 (osd.2) 97 : cluster [DBG] 4.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68935680 unmapped: 1277952 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.015215 3 0.000071
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.015252 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=86) [0] r=-1 lpr=86 pi=[67,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000087 1 0.000122
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000057 1 0.000060
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.017844 3 0.000043
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.017983 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=86) [0] r=-1 lpr=86 pi=[68,86)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 97)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:44.180815+0000 osd.2 (osd.2) 96 : cluster [DBG] 4.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:44.191377+0000 osd.2 (osd.2) 97 : cluster [DBG] 4.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000917 1 0.001125
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.001249 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000168 1 0.001426
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000040 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000042 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 88 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:15.671870+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:45.205706+0000 osd.2 (osd.2) 98 : cluster [DBG] 9.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:45.216239+0000 osd.2 (osd.2) 99 : cluster [DBG] 9.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68952064 unmapped: 1261568 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fcb0f000/0x0/0x4ffc00000, data 0x97642/0x10b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991428 4 0.003359
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.994948 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999107 4 0.000088
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999257 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=67/68 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.001962 5 0.000609
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000175 1 0.000201
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=67/67 les/c/f=68/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002492 5 0.000281
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001214 1 0.000034
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 99)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:45.205706+0000 osd.2 (osd.2) 98 : cluster [DBG] 9.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:45.216239+0000 osd.2 (osd.2) 99 : cluster [DBG] 9.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.049589 2 0.000086
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.049921 1 0.000031
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.036399 1 0.000084
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035438 2 0.000113
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 89 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:16.672050+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:46.214322+0000 osd.2 (osd.2) 100 : cluster [DBG] 8.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:46.225012+0000 osd.2 (osd.2) 101 : cluster [DBG] 8.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 655345 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.18 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 89 handle_osd_map epochs [90,90], i have 90, src has [1,90]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.889574 1 0.000106
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.014287 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.013619 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.013806 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[67,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.961952 1 0.000169
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.015326 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010304 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.011626 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=88) [0]/[2] async=[0] r=0 lpr=88 pi=[68,88)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987930298s) [0] async=[0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.825469971s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986706734s) [0] async=[0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 40'1059 active pruub 175.824310303s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] exit Reset 0.000138 1 0.000353
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] exit Start 0.000007 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90 pruub=14.986626625s) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.824310303s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] exit Reset 0.000733 1 0.001178
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] exit Start 0.000113 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 90 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90 pruub=14.987289429s) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 175.825469971s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 101)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:46.214322+0000 osd.2 (osd.2) 100 : cluster [DBG] 8.1c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:46.225012+0000 osd.2 (osd.2) 101 : cluster [DBG] 8.1c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:17.672203+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:47.251696+0000 osd.2 (osd.2) 102 : cluster [DBG] 12.18 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:47.262185+0000 osd.2 (osd.2) 103 : cluster [DBG] 12.18 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 68976640 unmapped: 1236992 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 90 handle_osd_map epochs [90,91], i have 90, src has [1,91]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 103)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:47.251696+0000 osd.2 (osd.2) 102 : cluster [DBG] 12.18 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:47.262185+0000 osd.2 (osd.2) 103 : cluster [DBG] 12.18 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010368 7 0.000098
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.009759 7 0.000259
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000060 1 0.000075
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000147 1 0.000160
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046093 2 0.000858
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.046197 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=6 ec=53/34 lis/c=88/67 les/c/f=89/68/0 sis=90) [0] r=-1 lpr=90 pi=[67,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056147 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 DELETING pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.097928 2 0.000127
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.098135 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 91 pg[10.1c( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=88/89 n=7 ec=53/34 lis/c=88/68 les/c/f=89/69/0 sis=90) [0] r=-1 lpr=90 pi=[68,90)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.108580 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:18.672363+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:48.233799+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:48.244488+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1187840 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcb06000/0x0/0x4ffc00000, data 0x9d582/0x113000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 105)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:48.233799+0000 osd.2 (osd.2) 104 : cluster [DBG] 7.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:48.244488+0000 osd.2 (osd.2) 105 : cluster [DBG] 7.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:19.672535+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:49.231906+0000 osd.2 (osd.2) 106 : cluster [DBG] 12.1a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:49.242438+0000 osd.2 (osd.2) 107 : cluster [DBG] 12.1a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69025792 unmapped: 1187840 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 107)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:49.231906+0000 osd.2 (osd.2) 106 : cluster [DBG] 12.1a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:49.242438+0000 osd.2 (osd.2) 107 : cluster [DBG] 12.1a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:20.672698+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:50.251692+0000 osd.2 (osd.2) 108 : cluster [DBG] 12.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:50.262250+0000 osd.2 (osd.2) 109 : cluster [DBG] 12.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69099520 unmapped: 1114112 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 109)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:50.251692+0000 osd.2 (osd.2) 108 : cluster [DBG] 12.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:50.262250+0000 osd.2 (osd.2) 109 : cluster [DBG] 12.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:21.672900+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:51.231296+0000 osd.2 (osd.2) 110 : cluster [DBG] 9.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:51.241615+0000 osd.2 (osd.2) 111 : cluster [DBG] 9.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 644173 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fcb09000/0x0/0x4ffc00000, data 0x9d582/0x113000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.228099823s of 10.284918785s, submitted: 46
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 111)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:51.231296+0000 osd.2 (osd.2) 110 : cluster [DBG] 9.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:51.241615+0000 osd.2 (osd.2) 111 : cluster [DBG] 9.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 92 ms_handle_reset con 0x55bdd4b71800 session 0x55bdd6f7cd20
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:22.673100+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:52.204149+0000 osd.2 (osd.2) 112 : cluster [DBG] 4.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:52.214855+0000 osd.2 (osd.2) 113 : cluster [DBG] 4.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 113)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:52.204149+0000 osd.2 (osd.2) 112 : cluster [DBG] 4.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:52.214855+0000 osd.2 (osd.2) 113 : cluster [DBG] 4.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:23.673286+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:53.214884+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.14 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:53.225360+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.14 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 92 handle_osd_map epochs [93,93], i have 92, src has [1,93]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69115904 unmapped: 1097728 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 115)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:53.214884+0000 osd.2 (osd.2) 114 : cluster [DBG] 7.14 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:53.225360+0000 osd.2 (osd.2) 115 : cluster [DBG] 7.14 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f(unlocked)] enter Initial
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000053 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000013 1 0.000028
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000109 1 0.000042
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f(unlocked)] enter Initial
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000073 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=0 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000032
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000074 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000069 1 0.000224
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000053 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000083 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000649 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 94 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:24.673448+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:54.221741+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:54.232307+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69107712 unmapped: 1105920 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 117)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:54.221741+0000 osd.2 (osd.2) 116 : cluster [DBG] 7.1f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:54.232307+0000 osd.2 (osd.2) 117 : cluster [DBG] 7.1f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 94 handle_osd_map epochs [94,95], i have 95, src has [1,95]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.007822 2 0.000283
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.008127 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.008274 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000091 1 0.000140
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.1f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.008689 2 0.000570
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.009923 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.009956 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=94) [2] r=0 lpr=94 pi=[75,94)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000098 1 0.000700
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000045 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 95 pg[10.f( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 heartbeat osd_stat(store_statfs(0x4fcafd000/0x0/0x4ffc00000, data 0xa3a32/0x11c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:25.673589+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:55.221999+0000 osd.2 (osd.2) 118 : cluster [DBG] 12.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:55.232569+0000 osd.2 (osd.2) 119 : cluster [DBG] 12.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69156864 unmapped: 1056768 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 119)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:55.221999+0000 osd.2 (osd.2) 118 : cluster [DBG] 12.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:55.232569+0000 osd.2 (osd.2) 119 : cluster [DBG] 12.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.006523 6 0.000045
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.005585 6 0.000118
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=75/75 les/c/f=76/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003200 3 0.000093
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000068 1 0.000060
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 lc 40'495 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035719 1 0.000069
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.038703 3 0.000302
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000069 1 0.000060
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 lc 40'148 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052673 1 0.000028
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 96 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:26.673721+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:56.212695+0000 osd.2 (osd.2) 120 : cluster [DBG] 8.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:56.223259+0000 osd.2 (osd.2) 121 : cluster [DBG] 8.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69124096 unmapped: 1089536 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 687609 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 121)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:56.212695+0000 osd.2 (osd.2) 120 : cluster [DBG] 8.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:56.223259+0000 osd.2 (osd.2) 121 : cluster [DBG] 8.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.921376 1 0.000024
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.012909 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.018605 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.974420 1 0.000019
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.013499 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.020054 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=95) [2]/[0] r=-1 lpr=95 pi=[75,95)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000086 1 0.000143
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000040
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000195 1 0.000260
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000052 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000059 1 0.000265
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=32
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=41
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=41
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=32
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001480 3 0.000251
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001782 3 0.000100
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000028 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 97 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:27.673872+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:57.171566+0000 osd.2 (osd.2) 122 : cluster [DBG] 8.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:57.182238+0000 osd.2 (osd.2) 123 : cluster [DBG] 8.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69140480 unmapped: 1073152 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.13 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 123)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:57.171566+0000 osd.2 (osd.2) 122 : cluster [DBG] 8.11 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:57.182238+0000 osd.2 (osd.2) 123 : cluster [DBG] 8.11 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 97 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002478 2 0.000150
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004141 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002630 2 0.000149
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.004571 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=95/96 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001269 4 0.000158
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=6 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=95/75 les/c/f=96/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001416 4 0.000108
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 98 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/75 les/c/f=98/76/0 sis=97) [2] r=0 lpr=97 pi=[75,97)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:28.674018+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:58.193445+0000 osd.2 (osd.2) 124 : cluster [DBG] 9.13 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:58.203964+0000 osd.2 (osd.2) 125 : cluster [DBG] 9.13 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69173248 unmapped: 1040384 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 125)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:58.193445+0000 osd.2 (osd.2) 124 : cluster [DBG] 9.13 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:58.203964+0000 osd.2 (osd.2) 125 : cluster [DBG] 9.13 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:29.674184+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:59.224676+0000 osd.2 (osd.2) 126 : cluster [DBG] 8.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:38:59.235265+0000 osd.2 (osd.2) 127 : cluster [DBG] 8.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1032192 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 127)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:59.224676+0000 osd.2 (osd.2) 126 : cluster [DBG] 8.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:38:59.235265+0000 osd.2 (osd.2) 127 : cluster [DBG] 8.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:30.674314+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:00.200086+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:00.210698+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaf1000/0x0/0x4ffc00000, data 0xaba5e/0x12a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69181440 unmapped: 1032192 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 129)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:00.200086+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:00.210698+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:31.674487+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:01.223274+0000 osd.2 (osd.2) 130 : cluster [DBG] 4.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:01.233774+0000 osd.2 (osd.2) 131 : cluster [DBG] 4.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 heartbeat osd_stat(store_statfs(0x4fcaf1000/0x0/0x4ffc00000, data 0xaba5e/0x12a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 1015808 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 697254 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.022220612s of 10.090888023s, submitted: 62
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10(unlocked)] enter Initial
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=0 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000030
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000041 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 131)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:01.223274+0000 osd.2 (osd.2) 130 : cluster [DBG] 4.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:01.233774+0000 osd.2 (osd.2) 131 : cluster [DBG] 4.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000195 1 0.000079
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000243 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 99 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:32.674628+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:02.257254+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:02.267800+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69197824 unmapped: 1015808 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1e scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 133)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:02.257254+0000 osd.2 (osd.2) 132 : cluster [DBG] 11.19 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:02.267800+0000 osd.2 (osd.2) 133 : cluster [DBG] 11.19 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.018166 2 0.000059
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.018439 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.018601 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=99) [2] r=0 lpr=99 pi=[53,99)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [99,100], i have 100, src has [1,100]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000200
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 100 pg[10.10( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:33.674766+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:03.274557+0000 osd.2 (osd.2) 134 : cluster [DBG] 12.1e scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:03.285080+0000 osd.2 (osd.2) 135 : cluster [DBG] 12.1e scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69238784 unmapped: 974848 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 135)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:03.274557+0000 osd.2 (osd.2) 134 : cluster [DBG] 12.1e scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:03.285080+0000 osd.2 (osd.2) 135 : cluster [DBG] 12.1e scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:34.674891+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:04.302129+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:04.312755+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 69279744 unmapped: 933888 heap: 70213632 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 1.624711 5 0.000734
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=53/53 les/c/f=55/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002739 4 0.000103
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000077 1 0.000103
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 lc 40'340 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.014579 1 0.000032
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 101 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.381344 1 0.000034
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.398855 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.023599 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=100) [2]/[0] r=-1 lpr=100 pi=[53,100)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000072 1 0.000122
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000080
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 137)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:04.302129+0000 osd.2 (osd.2) 136 : cluster [DBG] 11.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:04.312755+0000 osd.2 (osd.2) 137 : cluster [DBG] 11.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002114 3 0.000068
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000031 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 102 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:35.675022+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:05.307764+0000 osd.2 (osd.2) 138 : cluster [DBG] 9.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:05.318297+0000 osd.2 (osd.2) 139 : cluster [DBG] 9.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 909312 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 102 handle_osd_map epochs [101,103], i have 102, src has [1,103]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 102 handle_osd_map epochs [102,103], i have 103, src has [1,103]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12(unlocked)] enter Initial
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=0 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000123 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=0 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000241
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000190 1 0.000051
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000037 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000240 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003786 2 0.000144
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.006069 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=100/101 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 139)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:05.307764+0000 osd.2 (osd.2) 138 : cluster [DBG] 9.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:05.318297+0000 osd.2 (osd.2) 139 : cluster [DBG] 9.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=100/53 les/c/f=101/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002140 3 0.000234
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 103 pg[10.10( v 40'1059 (0'0,40'1059] local-lis/les=102/103 n=2 ec=53/34 lis/c=102/53 les/c/f=103/55/0 sis=102) [2] r=0 lpr=102 pi=[53,102)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:36.675143+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:06.336894+0000 osd.2 (osd.2) 140 : cluster [DBG] 12.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:06.347427+0000 osd.2 (osd.2) 141 : cluster [DBG] 12.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70361088 unmapped: 901120 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 725737 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 103 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.999621 2 0.000061
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.999896 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.999919 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=103) [2] r=0 lpr=103 pi=[62,103)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000070 1 0.000118
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 104 pg[10.12( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 141)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:06.336894+0000 osd.2 (osd.2) 140 : cluster [DBG] 12.13 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:06.347427+0000 osd.2 (osd.2) 141 : cluster [DBG] 12.13 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:37.675270+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:07.336440+0000 osd.2 (osd.2) 142 : cluster [DBG] 4.1 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:07.347046+0000 osd.2 (osd.2) 143 : cluster [DBG] 4.1 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcae0000/0x0/0x4ffc00000, data 0xb5cd6/0x139000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 827392 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 143)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:07.336440+0000 osd.2 (osd.2) 142 : cluster [DBG] 4.1 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:07.347046+0000 osd.2 (osd.2) 143 : cluster [DBG] 4.1 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:38.675416+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:08.309803+0000 osd.2 (osd.2) 144 : cluster [DBG] 7.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:08.320378+0000 osd.2 (osd.2) 145 : cluster [DBG] 7.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.471482 5 0.000042
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=62/62 les/c/f=63/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 57.050686 132 0.002046
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 57.054232 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 58.057858 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 58.057936 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=61) [2] r=0 lpr=61 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949655533s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 active pruub 197.427154541s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] exit Reset 0.000550 1 0.000960
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] exit Start 0.000039 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105 pruub=14.949279785s) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 197.427154541s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003429 4 0.000296
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000034 1 0.000052
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 lc 40'479 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 745472 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028621 1 0.000022
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 105 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.6 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 145)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:08.309803+0000 osd.2 (osd.2) 144 : cluster [DBG] 7.5 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:08.320378+0000 osd.2 (osd.2) 145 : cluster [DBG] 7.5 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.555556 3 0.000330
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.556372 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=105) [1] r=-1 lpr=105 pi=[61,105)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.525361 1 0.000029
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.558147 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029727 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=104) [2]/[1] r=-1 lpr=104 pi=[62,104)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000076 1 0.000727
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000188 1 0.000840
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000049 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000145
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:39.675531+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:09.308172+0000 osd.2 (osd.2) 146 : cluster [DBG] 4.6 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:09.318988+0000 osd.2 (osd.2) 147 : cluster [DBG] 4.6 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000318 1 0.000304
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000270 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000279 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=26
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=26
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001665 3 0.000059
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 106 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 70483968 unmapped: 778240 heap: 71262208 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996741 2 0.000077
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.998800 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=104/105 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [106,107], i have 107, src has [1,107]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997452 4 0.001677
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.999353 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=61/62 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.606948 116 0.000474
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.609836 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 49.612684 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 49.612712 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=68) [2] r=0 lpr=68 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393718719s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 active pruub 199.428039551s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] exit Reset 0.000073 1 0.000128
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107 pruub=15.393673897s) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 199.428039551s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:40.676037+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 4 last_log 149 sent 147 num 4 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:10.268114+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:10.278655+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=104/62 les/c/f=105/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003316 3 0.000297
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.12( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=4 ec=53/34 lis/c=106/62 les/c/f=107/63/0 sis=106) [2] r=0 lpr=106 pi=[62,106)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 147)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:09.308172+0000 osd.2 (osd.2) 146 : cluster [DBG] 4.6 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:09.318988+0000 osd.2 (osd.2) 147 : cluster [DBG] 4.6 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=61/61 les/c/f=62/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.005944 5 0.000570
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.001095 1 0.000101
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000348 1 0.000037
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.064291 2 0.000066
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 107 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.328914 1 0.000184
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.400946 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.400347 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.400446 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=106) [1]/[2] async=[1] r=0 lpr=106 pi=[61,106)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604859352s) [1] async=[1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 40'1059 active pruub 200.039978027s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] exit Reset 0.000189 1 0.000275
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] exit Start 0.000055 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108 pruub=15.604738235s) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 200.039978027s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.401337 3 0.000037
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.401403 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=107) [1] r=-1 lpr=107 pi=[68,107)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000137 1 0.000207
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000048
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000060 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 108 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 745472 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:41.676291+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 4 last_log 151 sent 149 num 4 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:11.235914+0000 osd.2 (osd.2) 150 : cluster [DBG] 9.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:11.246384+0000 osd.2 (osd.2) 151 : cluster [DBG] 9.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 149)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:10.268114+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.a scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:10.278655+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.a scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 151)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:11.235914+0000 osd.2 (osd.2) 150 : cluster [DBG] 9.b scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:11.246384+0000 osd.2 (osd.2) 151 : cluster [DBG] 9.b scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003088 4 0.000157
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.003882 6 0.000336
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.003515 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=68/69 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001546 2 0.000256
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=68/68 les/c/f=69/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.002438 5 0.001319
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000096 1 0.000076
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000389 1 0.000105
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 DELETING pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050869 2 0.000180
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.052488 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.13( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=106/107 n=5 ec=53/34 lis/c=106/61 les/c/f=107/62/0 sis=108) [1] r=-1 lpr=108 pi=[61,108)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.056661 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 712704 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 744746 data_alloc: 218103808 data_used: 12288
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.079876 2 0.000055
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 109 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:42.676525+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:12.227689+0000 osd.2 (osd.2) 152 : cluster [DBG] 9.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:12.238321+0000 osd.2 (osd.2) 153 : cluster [DBG] 9.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 153)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:12.227689+0000 osd.2 (osd.2) 152 : cluster [DBG] 9.8 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:12.238321+0000 osd.2 (osd.2) 153 : cluster [DBG] 9.8 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.385285378s of 10.488866806s, submitted: 116
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 109 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.923657 1 0.000078
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 1.006782 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 2.010544 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 2.010572 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=108) [1]/[2] async=[1] r=0 lpr=108 pi=[68,108)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995595932s) [1] async=[1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 40'1059 active pruub 201.442153931s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] exit Reset 0.000101 1 0.000148
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] exit Start 0.000016 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 110 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110 pruub=14.995530128s) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 201.442153931s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 704512 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:43.676703+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:13.188204+0000 osd.2 (osd.2) 154 : cluster [DBG] 9.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:13.198767+0000 osd.2 (osd.2) 155 : cluster [DBG] 9.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xc3e18/0x14e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 155)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:13.188204+0000 osd.2 (osd.2) 154 : cluster [DBG] 9.17 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:13.198767+0000 osd.2 (osd.2) 155 : cluster [DBG] 9.17 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 638976 heap: 72310784 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.139639 6 0.000120
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000658 1 0.000064
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.9 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 DELETING pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.050865 3 0.000165
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.051595 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 111 pg[10.14( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=108/109 n=5 ec=53/34 lis/c=108/68 les/c/f=109/69/0 sis=110) [1] r=-1 lpr=110 pi=[68,110)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.191287 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:44.676968+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:14.223186+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.9 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:14.241174+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.9 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcace000/0x0/0x4ffc00000, data 0xc3e18/0x14e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 157)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:14.223186+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.9 deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:14.241174+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.9 deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 1662976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:45.677230+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:15.249797+0000 osd.2 (osd.2) 158 : cluster [DBG] 12.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:15.260378+0000 osd.2 (osd.2) 159 : cluster [DBG] 12.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 159)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:15.249797+0000 osd.2 (osd.2) 158 : cluster [DBG] 12.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:15.260378+0000 osd.2 (osd.2) 159 : cluster [DBG] 12.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 1662976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xc5dba/0x151000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:46.677385+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:16.245871+0000 osd.2 (osd.2) 160 : cluster [DBG] 9.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:16.256329+0000 osd.2 (osd.2) 161 : cluster [DBG] 9.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 161)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:16.245871+0000 osd.2 (osd.2) 160 : cluster [DBG] 9.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:16.256329+0000 osd.2 (osd.2) 161 : cluster [DBG] 9.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 1777664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 746862 data_alloc: 218103808 data_used: 16384
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:47.677784+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:17.222098+0000 osd.2 (osd.2) 162 : cluster [DBG] 4.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:17.232760+0000 osd.2 (osd.2) 163 : cluster [DBG] 4.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 163)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:17.222098+0000 osd.2 (osd.2) 162 : cluster [DBG] 4.15 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:17.232760+0000 osd.2 (osd.2) 163 : cluster [DBG] 4.15 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71581696 unmapped: 1777664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:48.677970+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:18.242041+0000 osd.2 (osd.2) 164 : cluster [DBG] 12.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:18.252645+0000 osd.2 (osd.2) 165 : cluster [DBG] 12.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 165)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:18.242041+0000 osd.2 (osd.2) 164 : cluster [DBG] 12.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:18.252645+0000 osd.2 (osd.2) 165 : cluster [DBG] 12.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 1753088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:49.678167+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:19.218833+0000 osd.2 (osd.2) 166 : cluster [DBG] 4.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:19.229496+0000 osd.2 (osd.2) 167 : cluster [DBG] 4.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 167)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:19.218833+0000 osd.2 (osd.2) 166 : cluster [DBG] 4.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:19.229496+0000 osd.2 (osd.2) 167 : cluster [DBG] 4.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 1744896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 113 heartbeat osd_stat(store_statfs(0x4fcac7000/0x0/0x4ffc00000, data 0xc7ea6/0x154000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:50.678441+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:20.254235+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.18 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:20.264753+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.18 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 169)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:20.254235+0000 osd.2 (osd.2) 168 : cluster [DBG] 9.18 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:20.264753+0000 osd.2 (osd.2) 169 : cluster [DBG] 9.18 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 1736704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:51.678708+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:21.250188+0000 osd.2 (osd.2) 170 : cluster [DBG] 12.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:21.260796+0000 osd.2 (osd.2) 171 : cluster [DBG] 12.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 171)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:21.250188+0000 osd.2 (osd.2) 170 : cluster [DBG] 12.1d scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:21.260796+0000 osd.2 (osd.2) 171 : cluster [DBG] 12.1d scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 1720320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 759218 data_alloc: 218103808 data_used: 16384
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcac0000/0x0/0x4ffc00000, data 0xcc07e/0x15a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:52.678993+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:22.276282+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:22.286899+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 173)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:22.276282+0000 osd.2 (osd.2) 172 : cluster [DBG] 8.c scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:22.286899+0000 osd.2 (osd.2) 173 : cluster [DBG] 8.c scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 1671168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.010528564s of 10.088379860s, submitted: 33
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:53.679264+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:23.291169+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:23.301754+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 175)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:23.291169+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.2 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:23.301754+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.2 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:54.679490+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:24.296357+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:24.306574+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 177)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:24.296357+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:24.306574+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:55.679716+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:25.320576+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:25.331086+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 1638400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 179)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:25.320576+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.9 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:25.331086+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.9 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:56.680029+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:26.342384+0000 osd.2 (osd.2) 180 : cluster [DBG] 8.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:26.352870+0000 osd.2 (osd.2) 181 : cluster [DBG] 8.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xd0256/0x160000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 116, src has [1,118]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1589248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777465 data_alloc: 218103808 data_used: 24576
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 181)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:26.342384+0000 osd.2 (osd.2) 180 : cluster [DBG] 8.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:26.352870+0000 osd.2 (osd.2) 181 : cluster [DBG] 8.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:57.680211+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:27.343596+0000 osd.2 (osd.2) 182 : cluster [DBG] 12.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:27.354093+0000 osd.2 (osd.2) 183 : cluster [DBG] 12.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1556480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 183)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:27.343596+0000 osd.2 (osd.2) 182 : cluster [DBG] 12.7 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:27.354093+0000 osd.2 (osd.2) 183 : cluster [DBG] 12.7 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:58.680417+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:28.323272+0000 osd.2 (osd.2) 184 : cluster [DBG] 4.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:28.333794+0000 osd.2 (osd.2) 185 : cluster [DBG] 4.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 185)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:28.323272+0000 osd.2 (osd.2) 184 : cluster [DBG] 4.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:28.333794+0000 osd.2 (osd.2) 185 : cluster [DBG] 4.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:59.680590+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:29.303396+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.d deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:29.313994+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.d deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 187)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:29.303396+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.d deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:29.313994+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.d deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:00.680759+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:30.334456+0000 osd.2 (osd.2) 188 : cluster [DBG] 8.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:30.348559+0000 osd.2 (osd.2) 189 : cluster [DBG] 8.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 189)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:30.334456+0000 osd.2 (osd.2) 188 : cluster [DBG] 8.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:30.348559+0000 osd.2 (osd.2) 189 : cluster [DBG] 8.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:01.680884+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:31.291998+0000 osd.2 (osd.2) 190 : cluster [DBG] 8.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:31.302594+0000 osd.2 (osd.2) 191 : cluster [DBG] 8.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789067 data_alloc: 218103808 data_used: 40960
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 191)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:31.291998+0000 osd.2 (osd.2) 190 : cluster [DBG] 8.16 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:31.302594+0000 osd.2 (osd.2) 191 : cluster [DBG] 8.16 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:02.681054+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:32.256933+0000 osd.2 (osd.2) 192 : cluster [DBG] 11.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:32.271062+0000 osd.2 (osd.2) 193 : cluster [DBG] 11.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab0000/0x0/0x4ffc00000, data 0xd813e/0x16c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1531904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005482674s of 10.049218178s, submitted: 44
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 193)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:32.256933+0000 osd.2 (osd.2) 192 : cluster [DBG] 11.3 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:32.271062+0000 osd.2 (osd.2) 193 : cluster [DBG] 11.3 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:03.681288+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:33.222293+0000 osd.2 (osd.2) 194 : cluster [DBG] 8.6 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:33.232858+0000 osd.2 (osd.2) 195 : cluster [DBG] 8.6 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1515520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 195)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:33.222293+0000 osd.2 (osd.2) 194 : cluster [DBG] 8.6 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:33.232858+0000 osd.2 (osd.2) 195 : cluster [DBG] 8.6 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:04.681457+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:34.190940+0000 osd.2 (osd.2) 196 : cluster [DBG] 10.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:34.219159+0000 osd.2 (osd.2) 197 : cluster [DBG] 10.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 197)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:34.190940+0000 osd.2 (osd.2) 196 : cluster [DBG] 10.1f deep-scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:34.219159+0000 osd.2 (osd.2) 197 : cluster [DBG] 10.1f deep-scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:05.681614+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:35.187481+0000 osd.2 (osd.2) 198 : cluster [DBG] 10.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:35.222782+0000 osd.2 (osd.2) 199 : cluster [DBG] 10.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 199)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:35.187481+0000 osd.2 (osd.2) 198 : cluster [DBG] 10.f scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:35.222782+0000 osd.2 (osd.2) 199 : cluster [DBG] 10.f scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:06.681781+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:36.153594+0000 osd.2 (osd.2) 200 : cluster [DBG] 10.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:36.199500+0000 osd.2 (osd.2) 201 : cluster [DBG] 10.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xdc316/0x172000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1458176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811691 data_alloc: 218103808 data_used: 45056
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 201)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:36.153594+0000 osd.2 (osd.2) 200 : cluster [DBG] 10.4 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:36.199500+0000 osd.2 (osd.2) 201 : cluster [DBG] 10.4 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:07.681944+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:37.192787+0000 osd.2 (osd.2) 202 : cluster [DBG] 10.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:37.228093+0000 osd.2 (osd.2) 203 : cluster [DBG] 10.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1449984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 203)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:37.192787+0000 osd.2 (osd.2) 202 : cluster [DBG] 10.1 scrub starts
Oct 09 10:05:05 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:37.228093+0000 osd.2 (osd.2) 203 : cluster [DBG] 10.1 scrub ok
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:08.682100+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:09.682204+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa0000/0x0/0x4ffc00000, data 0xe23ae/0x17b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:10.682340+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1417216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e(unlocked)] enter Initial
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000048
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000161 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:11.682520+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1392640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820599 data_alloc: 218103808 data_used: 53248
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.767282 2 0.000052
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.767477 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.767500 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000084 1 0.000131
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:12.682690+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1376256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:13.682814+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1343488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.973302841s of 11.008710861s, submitted: 32
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.920757 5 0.000039
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.663154 94 0.002040
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.664701 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 46.669357 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 46.669400 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Reset 0.000316 1 0.000677
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Start 0.000047 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002385 4 0.000096
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000045
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035652 1 0.000093
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069951 1 0.000072
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.108230 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029027 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000197 1 0.000281
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.107543 3 0.000118
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.107638 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000128 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000555
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000220
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Oct 09 10:05:05 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001188 3 0.000093
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:14.682969+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0xe8562/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1318912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015862 4 0.000066
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015980 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014927 2 0.000066
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016367 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001208 3 0.000092
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.188305 5 0.000535
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000101 1 0.000097
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000646 1 0.000096
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.069841 2 0.000086
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:15.683135+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xec658/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.466386 1 0.000109
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.725557 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.741565 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.741587 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Reset 0.000091 1 0.000158
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Start
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started/Stray
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1302528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7274f00
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:16.683290+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007853 7 0.000095
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000067 1 0.000098
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 DELETING pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038244 2 0.000163
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038378 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046311 0 0.000000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:17.683435+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:18.683578+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1212416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:19.683733+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1171456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:20.683907+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1155072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:21.684050+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1146880 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:22.684183+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1138688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:23.684334+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1130496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:24.684445+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:25.684581+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:26.684717+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:27.684899+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:28.685015+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:29.685168+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:30.685316+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:31.685429+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1105920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:32.685557+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6ded0e0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:33.685701+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd6d6f2c0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:34.685857+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1089536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:35.685987+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1064960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:36.686102+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:37.686230+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:38.686349+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.992691040s of 25.019613266s, submitted: 36
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:39.686464+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:40.686613+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:41.686730+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835106 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:42.686855+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:43.687023+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:44.687143+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:45.687253+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1015808 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:46.687407+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1007616 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835778 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:47.687519+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:48.687648+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:49.687802+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9000
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.494864464s of 10.498138428s, submitted: 2
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 991232 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:50.687970+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:51.688091+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:52.688230+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:53.688345+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:54.688453+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd5d7f0e0
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:55.688598+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:56.688722+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:57.689425+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:58.689551+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:59.689678+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:00.689818+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:01.689983+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:02.690136+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 909312 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:03.690274+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:04.690403+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:05.690502+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:06.690641+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:07.690790+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:08.690925+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:09.691062+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:10.691204+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:11.691304+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:12.691412+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:13.691514+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:14.691654+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:15.691773+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:16.691883+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:17.692009+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:18.692111+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:19.692209+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:20.692332+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:21.692448+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:22.692544+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:23.692676+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:24.692814+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:25.692885+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:26.692998+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:27.693105+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:28.693206+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:29.693341+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:30.693456+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:31.693563+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:32.693666+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:33.693762+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:34.693891+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:35.693995+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:36.694096+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:37.694197+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:38.694294+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:39.694392+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:40.694508+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:41.694611+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:42.694720+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:43.694865+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:44.694996+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:45.695130+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 827392 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:46.695261+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:47.695381+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:48.695543+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:49.695663+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:50.695830+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:51.695985+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:52.696096+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:53.696231+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:54.696364+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:55.696496+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:56.696655+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:57.696814+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:58.696985+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:59.697119+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:00.697272+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:01.697401+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:02.697513+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:03.697659+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:04.697813+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:05.697957+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:06.698094+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:07.698257+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:08.698394+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:09.698509+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:10.698669+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:11.698809+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:05 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:12.698890+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:13.699029+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:05 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:14.699142+0000)
Oct 09 10:05:05 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:05 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.944450378s of 85.945846558s, submitted: 1
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:15.699256+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:16.699407+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:17.699586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:18.699715+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:19.699876+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:20.700009+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:21.700121+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:22.700285+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:23.700410+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:24.700532+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:25.700654+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:26.700790+0000)
Oct 09 10:05:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:05.999 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:27.700903+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:28.701011+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:29.701122+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:30.701266+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:31.701389+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:32.701495+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:33.701600+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 573440 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:34.701703+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:35.701807+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:36.701921+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:37.702024+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:38.702153+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:39.702259+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:40.702387+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:41.702496+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:42.702616+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:43.702757+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:44.702904+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:45.703028+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:46.703173+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:47.703328+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:48.703443+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:49.703573+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:50.704432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:51.704586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:52.704728+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd5d7ed20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:53.704888+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:54.705017+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:55.705149+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:56.705290+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:57.705385+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:58.705489+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:59.705586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:00.705731+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:01.705855+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:02.705962+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:03.706061+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:04.706171+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:05.706291+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:06.706403+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:07.706509+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:08.706622+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:09.706718+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.127048492s of 54.128883362s, submitted: 1
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:10.706831+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:11.706936+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:12.707074+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:13.707168+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:14.707320+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:15.707412+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:16.707516+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:17.707643+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:18.707734+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:19.707870+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:20.707987+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:21.708092+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:22.708186+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:23.708280+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:24.708372+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:25.708470+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:26.708594+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:27.708708+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:28.708803+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:29.708906+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:30.709063+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:31.709155+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:32.709244+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:33.709350+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:34.709458+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:35.709549+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:36.709683+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:37.709779+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 270336 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:38.709891+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:39.709984+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:40.710084+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:41.710211+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:42.710329+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:43.710381+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:44.710807+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:45.710971+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:46.711102+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:47.711360+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:48.711458+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:49.711555+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:50.711783+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:51.711884+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:52.712058+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:53.712193+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:54.712503+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:55.712637+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:56.712740+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:57.712858+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 147456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:58.713018+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:59.713254+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:00.713428+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:01.713562+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:02.713725+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:03.713865+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:04.713984+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:05.714089+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:06.714202+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:07.714359+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:08.714499+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:09.714632+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:10.714759+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:11.714893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:12.715228+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:13.715348+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:14.715446+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:15.715561+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:16.715657+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:17.715890+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 32768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:18.715990+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 24576 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:19.716097+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:20.716234+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:21.716334+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:22.716456+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:23.716559+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:24.716660+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:25.716794+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:26.716945+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:27.717050+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:28.717168+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:29.717282+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:30.717398+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:31.717501+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:32.717608+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1007616 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:33.717698+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 991232 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:34.717800+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 983040 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:35.717933+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:36.718040+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:37.718929+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:38.719076+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 966656 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:39.719235+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 958464 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:40.719418+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:41.719537+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:42.719690+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:43.719807+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:44.719910+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:45.720005+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:46.720144+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:47.720251+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 917504 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:48.720386+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:49.720517+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:50.720644+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:51.720748+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:52.720856+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:53.720949+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:54.721055+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:55.721174+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:56.721299+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:57.721423+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:58.721533+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:59.721641+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:00.721772+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:01.721885+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:02.721989+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:03.722107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:04.722222+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:05.722310+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:06.722400+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:07.722504+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 827392 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:08.722604+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:09.722730+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:10.722926+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:11.723026+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:12.723140+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:13.723246+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:14.723356+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:15.723457+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:16.723574+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:17.723665+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:18.723769+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 761856 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:19.723874+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 753664 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:20.723995+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:21.724096+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:22.724193+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:23.724298+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:24.724399+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:25.724528+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:26.724648+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:27.724747+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 704512 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:28.724857+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:29.724957+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:30.725083+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:31.725187+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:32.725335+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:33.725434+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:34.725568+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:35.725673+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:36.725784+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:37.725918+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 655360 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:38.726070+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:39.726181+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:40.726318+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:41.726455+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:42.726567+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 630784 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:43.726691+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 622592 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:44.726781+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:45.726875+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:46.726971+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:47.727104+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 606208 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:48.727238+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:49.727352+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:50.727472+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:51.727576+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:52.727688+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:53.727795+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:54.727893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:55.727985+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 573440 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:56.728077+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:57.728165+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:58.728257+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:59.728366+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:00.728471+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:01.728585+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:02.728683+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:03.728776+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:04.728882+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:05.728979+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:06.729073+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:07.729164+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:08.729267+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:09.729401+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:10.729518+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:11.729625+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:12.729724+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:13.729822+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:14.729926+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:15.730010+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:16.730105+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:17.730217+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:18.730308+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:19.730410+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:20.730529+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:21.730631+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:22.730721+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:23.730818+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:24.730883+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:25.730981+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:26.731079+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:27.731167+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:28.731262+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:29.731411+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:30.731594+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:31.731746+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:32.731868+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:33.731983+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:34.732107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:35.732259+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s
                                           Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:36.732400+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:37.732550+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:38.732706+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:39.732859+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:40.733039+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:41.733188+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:42.733324+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:43.733437+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:44.733588+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.829528809s of 215.830673218s, submitted: 1
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:45.733683+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 90112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:46.733806+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:47.733909+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:48.734470+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:49.734575+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:50.734722+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:51.734813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:52.734881+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:53.734975+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:54.735109+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:55.735201+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:56.735345+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:57.735440+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:58.735597+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:59.735691+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:00.735798+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:01.735873+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:02.735962+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:03.736056+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:04.736155+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:05.737455+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:06.737567+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:07.737677+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:08.737790+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:09.737887+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:10.738001+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:11.738092+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:12.738184+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:13.738282+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:14.738392+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:15.738494+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:16.738592+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:17.738686+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:18.738784+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:19.738882+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:20.738996+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:21.739092+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:22.739187+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:23.739317+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:24.739421+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:25.739519+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:26.739627+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:27.739741+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:28.739872+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:29.740023+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:30.740187+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:31.740290+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:32.740387+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 933888 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:33.740491+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 925696 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:34.740598+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:35.740713+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:36.740813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:37.740907+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:38.741003+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:39.741100+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:40.741201+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:41.741289+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:42.741389+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:43.741484+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:44.741581+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:45.741675+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:46.741779+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:47.741891+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:48.741994+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:49.742087+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:50.742201+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:51.742294+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:52.742388+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 843776 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:53.742481+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:54.742590+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:55.742686+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:56.742783+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:57.742893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:58.742999+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:59.743093+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:00.743206+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:01.743308+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:02.743414+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:03.743516+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:04.743687+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:05.743786+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:06.743899+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:07.744018+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:08.744115+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:09.744203+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:10.744308+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:11.744400+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:12.744505+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:13.744614+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:14.744717+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:15.744818+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:16.744875+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:17.745010+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:18.745116+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:19.745260+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 720896 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:20.745432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:21.745567+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:22.745650+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:23.745739+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:24.745864+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:25.745973+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:26.746071+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:27.746164+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:28.746271+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:29.746647+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:30.746774+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:31.746933+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:32.747027+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:33.747123+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:34.747282+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:35.747401+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:36.747502+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:37.747606+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:38.747715+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:39.747820+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:40.747985+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:41.748091+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:42.748189+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:43.748320+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:44.748416+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:45.748513+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:46.748615+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:47.748733+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:48.748854+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:49.748943+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:50.749053+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:51.749148+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:52.749248+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:53.749340+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:54.749441+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:55.749542+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:56.749649+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:57.749750+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:58.749868+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:59.749979+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:00.750095+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:01.750199+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:02.750359+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:03.750453+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:04.750555+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:05.750657+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:06.750771+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:07.750865+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:08.750953+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:09.751054+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:10.751164+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:11.751255+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:12.751357+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 581632 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:13.751482+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:14.751585+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:15.751688+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:16.751816+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:17.751943+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:18.752054+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:19.752148+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:20.752262+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:21.752356+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:22.752455+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:23.752550+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:24.752647+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:25.752741+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:26.752867+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:27.753019+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:28.753120+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:29.753252+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:30.753372+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:31.753471+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:32.753573+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:33.753692+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:34.753783+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:35.753897+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:36.753993+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:37.754117+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:38.754210+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:39.754307+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:40.754424+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:41.754527+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:42.754625+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:43.754727+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:44.754824+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:45.754985+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:46.755158+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:47.755278+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:48.755372+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:49.755487+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:50.755639+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 516096 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:51.755753+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:52.755866+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:53.755983+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:54.756087+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:55.756188+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:56.756340+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:57.756465+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:58.756581+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:59.756703+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:00.756816+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:01.756927+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:02.757025+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:03.757121+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:04.757251+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:05.757343+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:06.757443+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:07.757561+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:08.757667+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:09.757762+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:10.757885+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:11.758018+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:12.758156+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:13.758283+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:14.758398+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:15.758504+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:16.758606+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:17.758702+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:18.758809+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 466944 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:19.758880+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:20.758988+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:21.759094+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:22.759191+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:23.759292+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:24.759414+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:25.759557+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:26.759662+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:27.759759+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:28.759875+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:29.759989+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:30.760104+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:31.760229+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:32.760346+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:33.760543+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:34.760705+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:35.760833+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:36.760969+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:37.761062+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:38.761210+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:39.761354+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:40.761475+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:41.761660+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:42.761788+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:43.761878+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:44.762036+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:45.762166+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:46.762261+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:47.762351+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:48.762446+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:49.762537+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:50.762647+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:51.762750+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:52.762786+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:53.762884+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:54.762982+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:55.763079+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:56.763178+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:57.763294+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:58.763379+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:59.763480+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:00.763609+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:01.763730+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:02.763853+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:03.763948+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:04.764059+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:05.764172+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:06.764285+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:07.764402+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:08.764511+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:09.764614+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:10.764741+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:11.764877+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:12.764976+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:13.765079+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:14.765510+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:15.765608+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:16.765718+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:17.765878+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:18.765992+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 344064 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:19.766130+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:20.766251+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:21.766402+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:22.766492+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:23.766581+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:24.766711+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:25.766875+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:26.767015+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:27.767127+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:28.767223+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:29.767373+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:30.767541+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:31.767657+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:32.767805+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:33.767939+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:34.768100+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:35.768222+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:36.768337+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:37.768465+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:38.768687+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:39.768810+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:40.768950+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:41.769079+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:42.769219+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:43.769363+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:44.769501+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:45.769617+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:46.769727+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:47.769870+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:48.769994+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:49.770118+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:50.770257+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:51.770384+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:52.770482+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:53.770613+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:54.770742+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:55.770893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:56.771020+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:57.771157+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:58.771326+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:59.771463+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:00.771596+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:01.771729+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:02.771864+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:03.771998+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 245760 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:04.772149+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:05.772272+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:06.772397+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:07.772533+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:08.772633+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:09.772749+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:10.772869+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:11.772962+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:12.773066+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:13.773172+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:14.773297+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:15.773423+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:16.773529+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:17.773623+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:18.773735+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:19.773866+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:20.773980+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:21.774088+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:22.774207+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:23.774324+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:24.774439+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:25.774560+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:26.774685+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:27.774810+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:28.774904+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:29.775001+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:30.775116+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:31.775242+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:32.775366+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:33.775495+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:34.775614+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:35.775729+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:36.775886+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:37.775997+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:38.776121+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:39.776257+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:40.776373+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:41.776477+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:42.776585+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:43.776739+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:44.776872+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:45.777073+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:46.777179+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:47.777273+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:48.777393+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:49.777509+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:50.777646+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:51.777769+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:52.777879+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:53.777968+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 188416 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:54.778066+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:55.778162+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:56.778289+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:57.778418+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:58.778510+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:59.778615+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:00.778781+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:01.778902+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:02.779014+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:03.779113+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:04.779239+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:05.779380+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:06.779526+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:07.779663+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:08.779789+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:09.779917+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:10.780071+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:11.780195+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:12.780366+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:13.780481+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:14.780623+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:15.780757+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:16.780890+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:17.780991+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:18.781091+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:19.781206+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:20.781320+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:21.781426+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:22.781541+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:23.781657+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:24.781811+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:25.781981+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:26.782148+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:27.782264+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:28.782362+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:29.782506+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:30.782657+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:31.782813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:32.782918+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:33.783023+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:34.783163+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:35.783318+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:36.783432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:37.783539+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:38.783669+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:39.783826+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:40.783992+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:41.784107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:42.784223+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:43.784325+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:44.784430+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:45.784579+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:46.784687+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:47.784812+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:48.784898+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:49.785000+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:50.785132+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:51.785257+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:52.785379+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:53.785586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:54.785773+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:55.785929+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:56.786148+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:57.786306+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:58.786454+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:59.786527+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:00.786712+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:01.786894+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:02.787075+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:03.787432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:04.787572+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:05.787696+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:06.787871+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:07.788016+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:08.788202+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:09.788342+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:10.788507+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:11.788650+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:12.788809+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:13.788967+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:14.789099+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:15.789220+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:16.789347+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:17.789508+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:18.789661+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:19.789778+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:20.790008+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:21.790116+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:22.790286+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:23.790432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:24.790596+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:25.790714+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:26.790906+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:27.791038+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:28.791156+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:29.791301+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:30.791435+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:31.791546+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:32.791660+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:33.791767+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:34.791941+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:35.792063+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:36.792232+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:37.792358+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:38.792505+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:39.792639+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:40.792817+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:41.792994+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:42.793110+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:43.793318+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:44.793561+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:45.793663+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:46.793980+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:47.794104+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:48.794208+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:49.794327+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:50.794496+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:51.794618+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:52.794805+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:53.794908+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:54.795013+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:55.795156+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:56.795248+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:57.795350+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:58.795490+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:59.795593+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:00.795723+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:01.795876+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 81920 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:02.796006+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 497.047088623s of 497.167572021s, submitted: 220
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 65536 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:03.796141+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929854 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:04.796355+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c6d20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe0e000/0x0/0x4ffc00000, data 0xd68845/0xe0c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:05.796508+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c6f00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:06.796655+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:07.796782+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:08.796947+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942313 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:09.797124+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:10.797322+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:11.797469+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:12.797580+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.789459229s of 10.829751968s, submitted: 37
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:13.797775+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:14.797994+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:15.798154+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:16.798325+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:17.798471+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:18.798626+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:19.798729+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:20.798913+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:21.799393+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:22.799554+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:23.799659+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:24.799809+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:25.799939+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:26.800085+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:27.800247+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:28.800352+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:29.800488+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:30.800658+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:31.800818+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:32.801004+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:33.801160+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:34.801310+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:35.801429+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:36.801582+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:37.801738+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:38.801898+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:39.802066+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:40.802238+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:41.802368+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75c74a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd75c7860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c8c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd50c8c00 session 0x55bdd75c7a40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:42.802523+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c7c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c7e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:43.802680+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75dbe00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973607 data_alloc: 234881024 data_used: 11530240
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:44.802859+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.399969101s of 31.402891159s, submitted: 12
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:45.802999+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8ddc000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c8800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd50c8800 session 0x55bdd8ddc3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8ddd2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8ddda40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8ddcd20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:46.803173+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:47.803338+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8de45a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:48.803525+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd8de4780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051961 data_alloc: 234881024 data_used: 11530240
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:49.803690+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8de4960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8de4b40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:50.803904+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:51.804010+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:52.804121+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:53.804259+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:54.804424+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:55.804589+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:56.804760+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:57.804893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:58.805040+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:59.805219+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:00.805431+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.508440018s of 16.597480774s, submitted: 91
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:01.805532+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97c9000/0x0/0x4ffc00000, data 0x21f8bd5/0x22a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 4489216 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:02.805665+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:03.805775+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207958 data_alloc: 234881024 data_used: 17833984
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:04.805912+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:05.806132+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:06.806286+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a4000/0x0/0x4ffc00000, data 0x222bbd5/0x22d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:07.806384+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:08.806549+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207054 data_alloc: 234881024 data_used: 17838080
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:09.806642+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:10.806809+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:11.806922+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a1000/0x0/0x4ffc00000, data 0x222ebd5/0x22db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:12.807074+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:13.807224+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207662 data_alloc: 234881024 data_used: 17899520
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:14.807342+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.325393677s of 13.400735855s, submitted: 135
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:15.807473+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:16.807614+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:17.807755+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:18.807891+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207886 data_alloc: 234881024 data_used: 17899520
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:19.808006+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd75db4a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 5431296 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75db860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:20.808191+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd75c6d20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d63c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:21.808327+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:22.808468+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df4000/0x0/0x4ffc00000, data 0x2bdac37/0x2c88000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:23.808574+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7cd20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14303232 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283605 data_alloc: 234881024 data_used: 17903616
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:24.808742+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14295040 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7c3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:25.808869+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6d683c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.297891617s of 11.330360413s, submitted: 36
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6d68960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:26.808999+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:27.809131+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 9428992 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:28.809231+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df3000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350314 data_alloc: 251658240 data_used: 27828224
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:29.809368+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:30.809479+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:31.809649+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:32.809789+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:33.809900+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350570 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df2000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:34.810045+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:35.810190+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 6965 writes, 28K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6965 writes, 1430 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 985 writes, 2603 keys, 985 commit groups, 1.0 writes per commit group, ingest: 2.82 MB, 0.00 MB/s
                                           Interval WAL: 985 writes, 447 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:36.810340+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 5423104 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:37.810495+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.703389168s of 11.709489822s, submitted: 7
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 4284416 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:38.810642+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1406758 data_alloc: 251658240 data_used: 27897856
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:39.810799+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:40.810973+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:41.811101+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:42.811228+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:43.811330+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405238 data_alloc: 251658240 data_used: 27901952
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:44.811481+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:45.811619+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:46.811771+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:47.811904+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.993879318s of 10.153537750s, submitted: 296
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d69680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 11018240 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be3e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:48.812048+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f873a000/0x0/0x4ffc00000, data 0x3293c5a/0x3342000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:49.812214+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:50.812397+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:51.812528+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:52.812668+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:53.812778+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:54.812892+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:55.813032+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:56.813173+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8de4f00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101974016 unmapped: 18743296 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd7b5cf00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:57.813340+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:58.813492+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:59.813645+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:00.813824+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:01.814033+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:02.814208+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:03.814306+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:04.814448+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:05.814557+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:06.814705+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:07.814863+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:08.814996+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:09.815129+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:10.815240+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:11.815334+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6decd20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d6fa40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6a000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:12.815429+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7d680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.631416321s of 24.683015823s, submitted: 93
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6d6f2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd4a30780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75cf0e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd4fc1c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd755fa40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:13.815528+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095448 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:14.815680+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:15.815804+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd755e5a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:16.815932+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 29204480 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:17.816066+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:18.816190+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:19.816352+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 23945216 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:20.816514+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 23912448 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:21.816631+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:22.816781+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:23.816927+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:24.817072+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:25.817176+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:26.817311+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.983060837s of 14.016182899s, submitted: 42
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 14106624 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:27.817404+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:28.817538+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265302 data_alloc: 234881024 data_used: 21835776
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:29.817708+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:30.817898+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:31.818035+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:32.818169+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 14516224 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:33.818307+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:34.818446+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:35.818548+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:36.818706+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:37.818876+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:38.819024+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:39.819155+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.858042717s of 12.936762810s, submitted: 133
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:40.819292+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:41.819413+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:42.819558+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:43.819727+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260302 data_alloc: 234881024 data_used: 21839872
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:44.819880+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:45.820023+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:46.820150+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:47.820273+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e7000/0x0/0x4ffc00000, data 0x22d9bb2/0x2385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd755ef00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 14434304 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:48.820385+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd7b5d680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:49.820521+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:50.820705+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:51.820856+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:52.821022+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:53.821149+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:54.821307+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:55.821438+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:56.821570+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:57.821703+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:58.821858+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:59.822004+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:00.822161+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:01.822270+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:02.822420+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:03.822548+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:04.822696+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:05.822826+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:06.822964+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:07.823095+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:08.823221+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.193244934s of 29.216753006s, submitted: 42
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be32c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc3e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6e230e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd6e62000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd4fc3c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 28540928 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:09.823347+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd4a334a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:10.823482+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:11.823612+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:12.823762+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:13.823896+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:14.823997+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:15.824100+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:16.824231+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:17.824365+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd4fc34a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:18.824507+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc2780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7b5c3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.317389488s of 10.355058670s, submitted: 47
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd74754a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:19.824658+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101553 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:20.824895+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:21.825050+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:22.825213+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:23.825390+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:24.825561+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:25.825705+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:26.825867+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:27.825970+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:28.826107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:29.826207+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.569676399s of 10.576947212s, submitted: 6
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:30.826333+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 18022400 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:31.826425+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:32.826551+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:33.826737+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x2123bd5/0x21d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:34.826881+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250407 data_alloc: 234881024 data_used: 18644992
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:35.827016+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:36.827155+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:37.827276+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:38.827423+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:39.827566+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f947a000/0x0/0x4ffc00000, data 0x2145bd5/0x21f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245743 data_alloc: 234881024 data_used: 18653184
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:40.827688+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:41.827889+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:42.828044+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.912234306s of 12.987756729s, submitted: 133
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:43.828185+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:44.828330+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245823 data_alloc: 234881024 data_used: 18653184
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x214bbd5/0x21f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:45.828470+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fa40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1a40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:46.828623+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac7000/0x0/0x4ffc00000, data 0x2af7c37/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:47.828750+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:48.828881+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:49.829028+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326222 data_alloc: 234881024 data_used: 18653184
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:50.829265+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:51.829458+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac4000/0x0/0x4ffc00000, data 0x2afac37/0x2ba8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:52.829608+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079881668s of 10.119346619s, submitted: 44
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd7ae1c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:53.829749+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:54.829892+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332340 data_alloc: 234881024 data_used: 18657280
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4dc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:55.830032+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:56.830161+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:57.830297+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:58.830410+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:59.830548+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1399068 data_alloc: 251658240 data_used: 28499968
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:00.830682+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:01.830813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:02.830947+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:03.831074+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.117259026s of 11.127627373s, submitted: 16
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:04.831211+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 4317184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484586 data_alloc: 251658240 data_used: 28954624
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:05.831376+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:06.831526+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:07.831712+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f87000/0x0/0x4ffc00000, data 0x3636c5a/0x36e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:08.831883+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:09.832034+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490334 data_alloc: 251658240 data_used: 29265920
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:10.832196+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f66000/0x0/0x4ffc00000, data 0x3657c5a/0x3706000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:11.832346+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd75b50e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d800 session 0x55bdd6d71680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fc20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:12.832486+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:13.832670+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:14.832771+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260722 data_alloc: 234881024 data_used: 18653184
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.852729797s of 10.956905365s, submitted: 182
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7ed20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd75b5c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:15.832922+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:16.833071+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:17.833187+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:18.833327+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:19.833441+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:20.833607+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:21.833747+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:22.833887+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:23.833982+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:24.834105+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:25.834226+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:26.834318+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:27.834474+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:28.834598+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:29.834746+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:30.834971+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:31.835133+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:32.835251+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd6d6ed20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4dc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd6d62000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6b883c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7c780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:33.835354+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6e625a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 17555456 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd73f2000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd7ae0f00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.988206863s of 19.015766144s, submitted: 47
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6d6b2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6e225a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75ced20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd73f2000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd75cef00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6d6af00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:34.835491+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:35.835637+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0de000/0x0/0x4ffc00000, data 0x14e2b60/0x158e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:36.835768+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:37.835886+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:38.835987+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:39.836093+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd75cf680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:40.836331+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 24510464 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:41.836427+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 23560192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:42.836542+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:43.836640+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:44.836749+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:45.836861+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:46.836963+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:47.837087+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:48.837232+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:49.837358+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:50.837501+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:51.837664+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.775194168s of 17.811866760s, submitted: 39
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 22888448 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:52.837884+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:53.838075+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:54.838264+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:55.838443+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:56.838577+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:57.838763+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:58.838925+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:59.839069+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:00.839227+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:01.839323+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:02.839504+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:03.839639+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:04.839821+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:05.840003+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:06.840116+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:07.840543+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:08.840667+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:09.840803+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267353 data_alloc: 234881024 data_used: 19775488
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:10.841008+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:11.841162+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72741e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6af00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75c6960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75b41e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba7c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.414363861s of 20.486953735s, submitted: 115
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba7c00 session 0x55bdd4a30780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd75b5c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:12.841284+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:13.841404+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:14.841952+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316636 data_alloc: 234881024 data_used: 19775488
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:15.842087+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7220960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:16.842238+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4fc21e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 18440192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd7b5cd20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd7b5d0e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:17.842341+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:18.842440+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 16072704 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:19.842585+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1363761 data_alloc: 234881024 data_used: 26726400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:20.842736+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:21.842919+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:22.843100+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.219516754s of 11.251511574s, submitted: 40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:23.843198+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:24.843316+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1364569 data_alloc: 234881024 data_used: 26726400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:25.843417+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd0000/0x0/0x4ffc00000, data 0x27edbe5/0x289b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:26.843551+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14245888 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:27.843653+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 13918208 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:28.843811+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134766592 unmapped: 8675328 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:29.843931+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 7405568 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:30.844062+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:31.844112+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:32.844246+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:33.844376+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:34.844463+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.565647125s of 11.631405830s, submitted: 106
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:35.844587+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:36.844661+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 7323648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:37.844722+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 7282688 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:38.844896+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:39.845033+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:40.845175+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:41.845311+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:42.845403+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:43.845554+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:44.845711+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:45.845865+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:46.845959+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:47.846084+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:48.846183+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:49.846305+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:50.846408+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:51.846518+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:52.846620+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:53.846769+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:54.846864+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:55.846976+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:56.847085+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:57.847223+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:58.847357+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:59.847484+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.583915710s of 24.589715958s, submitted: 15
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:00.847591+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:01.847696+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:02.847804+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:03.847973+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:04.848107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:05.848208+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:06.848340+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:07.848486+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:08.848582+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 7012352 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:09.848681+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 6979584 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:10.848796+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:11.848896+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:12.849002+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:13.849116+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:14.849257+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.233993530s of 15.235481262s, submitted: 1
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1449663 data_alloc: 251658240 data_used: 27832320
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:15.849371+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd5d7d2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd5d7cf00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:16.849472+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7b5d2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 12156928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:17.849571+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:18.849683+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:19.849813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269843 data_alloc: 234881024 data_used: 19759104
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:20.850003+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:21.850100+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cfe00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd7275c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd7ae10e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:22.850198+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:23.850305+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:24.850401+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:25.850511+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:26.850641+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:27.850772+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:28.850873+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:29.850995+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:30.851140+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:31.851270+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:32.851351+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:33.851468+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:34.851634+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:35.851774+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:36.851913+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:37.852048+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:38.852179+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72745a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7475e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6f7d2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75ce000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.875421524s of 23.928354263s, submitted: 95
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cf860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6dec000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125640704 unmapped: 28368896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd755e000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd755e5a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:39.852311+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135665 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:40.852456+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6f2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:41.852586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6d6fc20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6ef00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e3000/0x0/0x4ffc00000, data 0x14ddbb2/0x1589000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:42.852728+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 28311552 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:43.852883+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:44.852992+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191655 data_alloc: 234881024 data_used: 19226624
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:45.853113+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:46.853245+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d6f0e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6f7c780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:47.853348+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:48.853497+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:49.853638+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:50.853799+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:51.853907+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:52.854034+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:53.854166+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:54.854304+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:55.854465+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:56.854594+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:57.854735+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4e094a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd7221860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75da780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.361354828s of 19.447809219s, submitted: 108
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cf680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7ae1860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6be23c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be2b40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be3e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:58.854888+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:59.855023+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116223 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:00.855184+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6dec000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6ded0e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:01.855331+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec5a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd4a30780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:02.855425+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:03.855518+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:04.855665+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6d6a3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7275e00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146352 data_alloc: 234881024 data_used: 15314944
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd4a33a40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:05.855811+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:06.855973+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:07.856077+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:08.856253+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:09.856423+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:10.856604+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:11.856770+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:12.856940+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:13.857079+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:14.857201+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:15.857334+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:16.857441+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:17.857595+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:18.857718+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:19.857835+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:20.858059+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:21.858190+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:22.858337+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.317052841s of 24.347005844s, submitted: 33
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6be2b40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6dec000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:23.858496+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:24.858636+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136711 data_alloc: 234881024 data_used: 11534336
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:25.858773+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:26.858886+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:27.859021+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:28.859151+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:29.859282+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:30.859438+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:31.859547+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:32.859699+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:33.859859+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:34.859964+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:35.860083+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:36.860216+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.554158211s of 14.583094597s, submitted: 35
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:37.860308+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f90f1000/0x0/0x4ffc00000, data 0x20bfbb2/0x216b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 21413888 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:38.860425+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:39.860546+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279137 data_alloc: 234881024 data_used: 15622144
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:40.860712+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:41.860818+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:42.860916+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9062000/0x0/0x4ffc00000, data 0x214ebb2/0x21fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:43.861049+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:44.861170+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278745 data_alloc: 234881024 data_used: 15638528
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:45.861290+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9041000/0x0/0x4ffc00000, data 0x216fbb2/0x221b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:46.861428+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:47.861528+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.235909462s of 11.308976173s, submitted: 148
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:48.861646+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75b4b40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:49.861754+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:50.861891+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:51.861997+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:52.862118+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:53.862370+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:54.862510+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:55.862663+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:56.862795+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:57.862951+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:58.863107+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:59.863262+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:00.863400+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:01.863526+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:02.863642+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:03.863777+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:04.863893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d743c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6e221e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6e780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:05.864039+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6d6eb40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.061567307s of 17.076330185s, submitted: 25
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4a301e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd74752c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0d400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0d400 session 0x55bdd4a305a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75b5c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cfc20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:06.864181+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:07.864296+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:08.864429+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6decb40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:09.864558+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7b5c780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2000 session 0x55bdd58a8f00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 33587200 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172702 data_alloc: 234881024 data_used: 9961472
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6e23a40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:10.864749+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 33554432 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:11.864874+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:12.865018+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:13.865152+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:14.865297+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234717 data_alloc: 234881024 data_used: 18628608
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:15.865442+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:16.865571+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:17.865713+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4a330e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd75ce3c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 30253056 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd5d7f860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2c00 session 0x55bdd72214a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.769443512s of 12.800458908s, submitted: 35
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7ae0f00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6d6b860
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd7474b40
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:18.865812+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd74750e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3000 session 0x55bdd4fc23c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:19.865934+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319022 data_alloc: 234881024 data_used: 18628608
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:20.866109+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133038080 unmapped: 25174016 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:21.866245+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 26550272 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e22d20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87c8000/0x0/0x4ffc00000, data 0x29e5b80/0x2a93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:22.866375+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 26525696 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:23.866502+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:24.866650+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:25.866789+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:26.866903+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:27.867070+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:28.867193+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:29.867322+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:30.867464+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:31.867574+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:32.867721+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.509668350s of 14.591269493s, submitted: 121
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 11067392 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:33.867872+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:34.868039+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566109 data_alloc: 251658240 data_used: 29638656
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:35.868178+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:36.868299+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:37.868435+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:38.868556+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:39.868686+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:40.868871+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:41.869014+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:42.869161+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:43.869306+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:44.869449+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:45.869626+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:46.869758+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:47.869893+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:48.870041+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.815006256s of 15.883452415s, submitted: 119
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d75c20
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7274000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd4e09680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:49.870160+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323675 data_alloc: 234881024 data_used: 18726912
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:50.870313+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:51.870462+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:52.870619+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd57dfe00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd57df680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd58a9680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8031000/0x0/0x4ffc00000, data 0x1fddb70/0x208a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:53.870901+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:54.871056+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:55.871206+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:56.871361+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:57.871522+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:58.871711+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:59.871861+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:00.872056+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:01.872162+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:02.872325+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:03.872428+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:04.872551+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:05.872662+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:06.872798+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:07.872924+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:08.873061+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:09.873229+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:10.873422+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.839834213s of 21.894390106s, submitted: 91
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d70960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4a30780
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd6f7c5a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd7ae12c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75b52c0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:11.873553+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:12.873726+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e230e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd7b5c960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:13.873873+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7b5d680
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4e085a0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 26050560 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:14.874065+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24846336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264737 data_alloc: 234881024 data_used: 18120704
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:15.874214+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:16.874366+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:17.874519+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:18.874670+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:19.874816+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275377 data_alloc: 234881024 data_used: 19714048
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:20.874980+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:21.875093+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:22.875253+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:23.875423+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.853632927s of 12.886682510s, submitted: 37
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 16121856 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:24.875575+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6fd9000/0x0/0x4ffc00000, data 0x1e97b73/0x1f43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346571 data_alloc: 234881024 data_used: 20361216
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:25.875717+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:26.875874+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:27.876048+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:28.876194+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:29.876352+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345475 data_alloc: 234881024 data_used: 20361216
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:30.876530+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:31.876660+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:32.876821+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:33.877011+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:34.877147+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.167773247s of 11.241725922s, submitted: 103
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4e08960
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd755e000
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd745cc00
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd745cc00 session 0x55bdd5d7f0e0
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:35.877260+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:36.877376+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:37.877532+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:38.877785+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:39.877969+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:40.878150+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:41.878263+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:42.878407+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:43.878565+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:44.878677+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:45.878820+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:46.878930+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:47.879086+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:48.879266+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:49.879412+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:50.879586+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:51.879689+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:52.879888+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:53.880055+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:54.880177+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:55.880336+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:56.880468+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:57.880627+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:58.880774+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:59.880928+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:00.881135+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:01.881251+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:02.881366+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:03.881546+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:04.881694+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:05.881813+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:06.881898+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:07.882021+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:08.882187+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:09.882319+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:10.882485+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:11.882582+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:12.882693+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:13.882796+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:14.882937+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:15.883106+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:16.883211+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:17.883321+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:18.883446+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:19.883554+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:20.883743+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:21.883897+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:22.884048+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:23.884186+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:24.884359+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:25.884485+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:26.884584+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:27.884685+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:28.884793+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:29.884882+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:30.885005+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:05:06 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:31.885105+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:32.885209+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}'
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:33.885330+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 23601152 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:05:06 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:34.885432+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 23617536 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:05:06 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:35.885544+0000)
Oct 09 10:05:06 compute-2 ceph-osd[11347]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.17130 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26701 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26975 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: pgmap v962: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.27002 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26725 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1164563443' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/62328478' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2194491156' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26752 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26755 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/670162803' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1482807627' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.17202 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26782 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.27050 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1284909676' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3781638129' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.17229 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.26806 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3763114863' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: from='client.27068 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:06 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:05:06 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2884084467' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:06.800 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:06 compute-2 nova_compute[163961]: 2025-10-09 10:05:06.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:07 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:05:07 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3580093067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2569366303' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.17256 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2593566875' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2884084467' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.26836 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.27092 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3798516040' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.17277 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/539467632' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.26863 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.27122 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1096470766' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/754514562' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: pgmap v963: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.17304 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.27140 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.27146 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3580093067' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:05:07 compute-2 crontab[173507]: (root) LIST (root)
Oct 09 10:05:07 compute-2 nova_compute[163961]: 2025-10-09 10:05:07.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:07 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 09 10:05:07 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/68246502' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:08.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/933132204' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-2 nova_compute[163961]: 2025-10-09 10:05:08.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/911082628' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2179929761' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.26908 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.27161 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/68246502' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.27173 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2236945637' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4220218664' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.26941 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.27194 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/933132204' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.26956 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3105571077' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/911082628' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.26971 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.27221 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2179929761' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1308911326' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/567861671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:08.802 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:08 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 09 10:05:08 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2020451013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/702201220' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1411436454' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3524500067' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/864826343' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.17385 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1308911326' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3954604759' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/567861671' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2020451013' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.17409 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3023508330' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4228889836' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/702201220' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1411436454' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1943906294' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.17439 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: pgmap v964: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/470775445' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2923247886' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3524500067' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/4143261129' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/884900180' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 09 10:05:09 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1984048702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:10.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1673871356' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 podman[173974]: 2025-10-09 10:05:10.249155663 +0000 UTC m=+0.083749282 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:05:10.284 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 09 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/705132865' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3035430115' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4143261129' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.17460 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3165839660' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1467015024' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/884900180' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/991276937' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2695539313' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1217789133' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1984048702' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2624478517' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3566093463' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1673871356' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3564578496' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/705132865' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4032395893' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/212958408' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1794939133' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:10 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 09 10:05:10 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/310330597' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:10.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:11 compute-2 systemd[1]: Starting Hostname Service...
Oct 09 10:05:11 compute-2 systemd[1]: Started Hostname Service.
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1464335686' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/310330597' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27169 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/15960154' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1550517676' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27178 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2451567545' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4152597779' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27199 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3056944751' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1744172385' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27440 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27428 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.27446 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: pgmap v965: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:11 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1183017444' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:05:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784783823' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 09 10:05:11 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049596796' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:11 compute-2 nova_compute[163961]: 2025-10-09 10:05:11.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:12 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:12.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 09 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/499448165' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-2 podman[174321]: 2025-10-09 10:05:12.272610951 +0000 UTC m=+0.104535964 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd)
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.27244 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2784783823' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.27470 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2255613207' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1760469970' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.27265 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1103120249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4038948101' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.27283 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1049596796' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/499448165' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.27298 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.17691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.17682 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4186151168' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/225349446' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3642453026' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 nova_compute[163961]: 2025-10-09 10:05:12.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1009466217' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 09 10:05:12 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4150804299' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:12.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 sudo[174577]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:13 compute-2 sudo[174577]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:13 compute-2 sudo[174577]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:13 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 09 10:05:13 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2364528424' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.27548 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3642453026' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.17718 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1009466217' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.17724 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.27572 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4150804299' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.27361 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2339899425' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.17748 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.27605 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.27391 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: pgmap v966: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/557993267' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.17808 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2364528424' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:14.009 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 09 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2788345293' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:05:14 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                          ** DB Stats **
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Cumulative writes: 5509 writes, 29K keys, 5509 commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.04 MB/s
                                          Cumulative WAL: 5509 writes, 5509 syncs, 1.00 writes per sync, written: 0.07 GB, 0.04 MB/s
                                          Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                          Interval writes: 1539 writes, 7583 keys, 1539 commit groups, 1.0 writes per commit group, ingest: 17.37 MB, 0.03 MB/s
                                          Interval WAL: 1539 writes, 1539 syncs, 1.00 writes per sync, written: 0.02 GB, 0.03 MB/s
                                          Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                          
                                          ** Compaction Stats [default] **
                                          Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    374.2      0.12              0.08        15    0.008       0      0       0.0       0.0
                                            L6      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.1    396.8    340.3      0.53              0.28        14    0.038     72K   7335       0.0       0.0
                                           Sum      1/0   13.47 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.1    325.4    346.4      0.64              0.36        29    0.022     72K   7335       0.0       0.0
                                           Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.0    354.9    361.8      0.21              0.12        10    0.021     30K   2536       0.0       0.0
                                          
                                          ** Compaction Stats [default] **
                                          Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                          ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    396.8    340.3      0.53              0.28        14    0.038     72K   7335       0.0       0.0
                                          High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    377.1      0.11              0.08        14    0.008       0      0       0.0       0.0
                                          User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.8      0.00              0.00         1    0.001       0      0       0.0       0.0
                                          
                                          Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                          
                                          Uptime(secs): 1800.0 total, 600.0 interval
                                          Flush(GB): cumulative 0.042, interval 0.010
                                          AddFile(GB): cumulative 0.000, interval 0.000
                                          AddFile(Total Files): cumulative 0, interval 0
                                          AddFile(L0 Files): cumulative 0, interval 0
                                          AddFile(Keys): cumulative 0, interval 0
                                          Cumulative compaction: 0.22 GB write, 0.12 MB/s write, 0.20 GB read, 0.12 MB/s read, 0.6 seconds
                                          Interval compaction: 0.07 GB write, 0.13 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.2 seconds
                                          Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                          Block cache BinnedLRUCache@0x5647939f1350#2 capacity: 304.00 MB usage: 16.57 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 9.3e-05 secs_since: 0
                                          Block cache entry stats(count,size,portion): DataBlock(937,15.99 MB,5.25854%) FilterBlock(29,216.98 KB,0.0697036%) IndexBlock(29,378.44 KB,0.121568%) Misc(1,0.00 KB,0%)
                                          
                                          ** File Read Latency Histogram By Level [default] **
Oct 09 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 09 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3255535698' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/590546173' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.17829 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.27457 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3059809898' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.27689 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.17847 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2788345293' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/128357852' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3101704029' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3255535698' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 09 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2029825744' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:14.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:14 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.17865 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/521439099' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2953610832' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2029825744' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1194589566' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.17883 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2696534015' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/704435951' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: pgmap v967: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:15 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2053053048' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 09 10:05:15 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3292770511' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:16.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 09 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1059604019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Oct 09 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1855735972' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-2 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct 09 10:05:16 compute-2 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct 09 10:05:16 compute-2 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct 09 10:05:16 compute-2 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct 09 10:05:16 compute-2 kernel: cfg80211: failed to load regulatory.db
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.27550 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.27788 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.17928 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3292770511' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2862235529' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1329928561' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1059604019' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1855735972' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4274574662' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:05:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000021s ======
Oct 09 10:05:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:16.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct 09 10:05:16 compute-2 nova_compute[163961]: 2025-10-09 10:05:16.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:16 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Oct 09 10:05:16 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3703601036' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.27833 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.27592 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1952453794' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3952041900' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3703601036' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1880054577' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.27857 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-2 ceph-mon[5983]: pgmap v968: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:17 compute-2 ceph-mon[5983]: from='client.27625 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:17 compute-2 nova_compute[163961]: 2025-10-09 10:05:17.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:17 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Oct 09 10:05:17 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2006132426' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:18.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:18 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Oct 09 10:05:18 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/260578882' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.17988 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.27869 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.27643 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2510247055' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1139436647' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2006132426' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1206458610' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/260578882' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:18 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1268401104' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Oct 09 10:05:18 compute-2 ovs-appctl[176173]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-2 ovs-appctl[176180]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-2 ovs-appctl[176186]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Oct 09 10:05:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:18.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Oct 09 10:05:19 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/133937591' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Oct 09 10:05:19 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1514504317' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:19 compute-2 podman[176500]: 2025-10-09 10:05:19.586782915 +0000 UTC m=+0.119336652 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.27899 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.27905 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.18030 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.27914 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.27697 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2735777596' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2325890707' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/133937591' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.18051 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: pgmap v969: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1843500510' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1514504317' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:20.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:05:20 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3265058752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.18066 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.27959 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.27965 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/977502647' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.18084 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.27754 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1381470307' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2853944145' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3265058752' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:20.812 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:20 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Oct 09 10:05:20 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1079050608' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:21 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Oct 09 10:05:21 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1461598608' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.18111 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1420165825' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1079050608' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.18126 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3769982294' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1461598608' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2681174729' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.28028 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:21 compute-2 ceph-mon[5983]: pgmap v970: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:21 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1497053247' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Oct 09 10:05:21 compute-2 nova_compute[163961]: 2025-10-09 10:05:21.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:22.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Oct 09 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/697504947' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Oct 09 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3640841864' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 nova_compute[163961]: 2025-10-09 10:05:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.28034 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2700875887' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1640101779' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.18165 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2969326191' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/697504947' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.18177 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/757521440' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3640841864' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/101271543' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/363929979' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:22.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:22 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls", "format": "json-pretty"} v 0)
Oct 09 10:05:22 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/298067314' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Oct 09 10:05:23 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3260254795' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/298067314' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/997421174' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.28097 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.27871 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3180762565' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: pgmap v971: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3047035409' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3260254795' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Oct 09 10:05:23 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/207342957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:24.019 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.18234 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2416067552' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/207342957' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3534344203' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.28136 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.27913 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1831731513' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2405584722' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4266558429' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:24.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Oct 09 10:05:25 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2843286844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Oct 09 10:05:25 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/426215262' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.28163 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3817141235' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.27937 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.28178 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1919356694' entity='client.admin' cmd=[{"prefix": "fs ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.27946 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3732170856' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: pgmap v972: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.18315 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2843286844' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3379022171' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/426215262' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:26.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Oct 09 10:05:26 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2278152414' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.28214 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3632668552' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.27985 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.28223 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/183718478' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.27997 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3687400171' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.18360 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2278152414' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1096908375' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/213647537' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:26.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:26 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Oct 09 10:05:26 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2375740516' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:26 compute-2 nova_compute[163961]: 2025-10-09 10:05:26.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:27 compute-2 nova_compute[163961]: 2025-10-09 10:05:27.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:27 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2375740516' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.28268 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.18387 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.28274 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.28280 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: pgmap v973: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.18399 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2732266303' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/248433905' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 systemd[1]: Starting Time & Date Service...
Oct 09 10:05:27 compute-2 systemd[1]: Started Time & Date Service.
Oct 09 10:05:27 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Oct 09 10:05:27 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2437616287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:28.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:28 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 09 10:05:28 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3010218255' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.28036 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2437616287' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3927175723' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2772078955' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.18429 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3010218255' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:28.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:29 compute-2 ceph-mon[5983]: from='client.18435 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3527154993' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1539775753' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-2 ceph-mon[5983]: pgmap v974: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:29 compute-2 ceph-mon[5983]: from='client.18450 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:30.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:30 compute-2 podman[178579]: 2025-10-09 10:05:30.239704509 +0000 UTC m=+0.072514870 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:05:30 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Oct 09 10:05:30 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242952019' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-2 ceph-mon[5983]: from='client.18456 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2865180231' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4242952019' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Oct 09 10:05:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:30.823 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:31 compute-2 ceph-mon[5983]: pgmap v975: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:31 compute-2 nova_compute[163961]: 2025-10-09 10:05:31.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:32.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:32 compute-2 nova_compute[163961]: 2025-10-09 10:05:32.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:32.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:33 compute-2 sudo[178603]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:33 compute-2 sudo[178603]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:33 compute-2 sudo[178603]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:34.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:34 compute-2 ceph-mon[5983]: pgmap v976: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:34.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:36.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:36 compute-2 ceph-mon[5983]: pgmap v977: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:36.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:36 compute-2 nova_compute[163961]: 2025-10-09 10:05:36.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:37 compute-2 nova_compute[163961]: 2025-10-09 10:05:37.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:38.039 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:38 compute-2 ceph-mon[5983]: pgmap v978: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:38.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:40.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:40 compute-2 ceph-mon[5983]: pgmap v979: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:40.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:41 compute-2 podman[178638]: 2025-10-09 10:05:41.209545148 +0000 UTC m=+0.040831679 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct 09 10:05:41 compute-2 nova_compute[163961]: 2025-10-09 10:05:41.899 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:42.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:42 compute-2 ceph-mon[5983]: pgmap v980: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:42 compute-2 nova_compute[163961]: 2025-10-09 10:05:42.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:42.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:43 compute-2 podman[178657]: 2025-10-09 10:05:43.210454547 +0000 UTC m=+0.044443199 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, config_id=multipathd)
Oct 09 10:05:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:44.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:44 compute-2 ceph-mon[5983]: pgmap v981: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:44.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:46.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:46 compute-2 ceph-mon[5983]: pgmap v982: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:46.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:46 compute-2 nova_compute[163961]: 2025-10-09 10:05:46.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:47 compute-2 sudo[178680]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:05:47 compute-2 sudo[178680]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-2 sudo[178680]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:47 compute-2 sudo[178705]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:05:47 compute-2 sudo[178705]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:47 compute-2 sudo[178705]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:47 compute-2 nova_compute[163961]: 2025-10-09 10:05:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:48.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:48 compute-2 ceph-mon[5983]: pgmap v983: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:05:48 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:48 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:48.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: pgmap v984: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-mon[5983]: pgmap v985: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:05:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:50.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:50 compute-2 podman[178762]: 2025-10-09 10:05:50.234410404 +0000 UTC m=+0.067497368 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 10:05:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:50.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:51 compute-2 nova_compute[163961]: 2025-10-09 10:05:51.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:52.053 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:52 compute-2 ceph-mon[5983]: pgmap v986: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:52 compute-2 nova_compute[163961]: 2025-10-09 10:05:52.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:52.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:52 compute-2 sudo[178788]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:05:52 compute-2 sudo[178788]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:52 compute-2 sudo[178788]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:53 compute-2 sudo[178813]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:05:53 compute-2 sudo[178813]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:05:53 compute-2 sudo[178813]: pam_unix(sudo:session): session closed for user root
Oct 09 10:05:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:53 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:05:53 compute-2 ceph-mon[5983]: pgmap v987: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:05:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:54.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:05:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:54.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:55 compute-2 nova_compute[163961]: 2025-10-09 10:05:55.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:05:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:05:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:56.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:56 compute-2 ceph-mon[5983]: pgmap v988: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:56.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:56 compute-2 nova_compute[163961]: 2025-10-09 10:05:56.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:57 compute-2 nova_compute[163961]: 2025-10-09 10:05:57.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:05:57 compute-2 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct 09 10:05:57 compute-2 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct 09 10:05:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:05:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:05:58.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:05:58 compute-2 ceph-mon[5983]: pgmap v989: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:05:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1189174758' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1898786243' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:05:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:05:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:05:58.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:05:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.180 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.198 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:05:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1831269240' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:05:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2688699616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.552 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:05:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.750 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.751 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4808MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.751 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.752 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:05:59 compute-2 nova_compute[163961]: 2025-10-09 10:05:59.877 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:05:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:05:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:05:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:05:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:00.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:06:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4016613573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.238 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.243 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.254 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.255 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.255 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.504s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.256 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:00 compute-2 nova_compute[163961]: 2025-10-09 10:06:00.256 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct 09 10:06:00 compute-2 ceph-mon[5983]: pgmap v990: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2111307278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2688699616' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4016613573' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:00.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:01 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:01 compute-2 podman[178895]: 2025-10-09 10:06:01.210426137 +0000 UTC m=+0.045969869 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.258 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.260 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.261 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.272 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.273 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:06:01 compute-2 ceph-mon[5983]: pgmap v991: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:01 compute-2 nova_compute[163961]: 2025-10-09 10:06:01.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:02.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:02 compute-2 nova_compute[163961]: 2025-10-09 10:06:02.183 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:02 compute-2 nova_compute[163961]: 2025-10-09 10:06:02.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:02.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:04.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:04 compute-2 nova_compute[163961]: 2025-10-09 10:06:04.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:04 compute-2 ceph-mon[5983]: pgmap v992: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:04.854 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Oct 09 10:06:05 compute-2 nova_compute[163961]: 2025-10-09 10:06:05.187 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Oct 09 10:06:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:05 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:06 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:06.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:06 compute-2 sudo[171524]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:06 compute-2 sshd-session[171523]: Received disconnect from 192.168.122.10 port 53886:11: disconnected by user
Oct 09 10:06:06 compute-2 sshd-session[171523]: Disconnected from user zuul 192.168.122.10 port 53886
Oct 09 10:06:06 compute-2 sshd-session[171503]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:06 compute-2 systemd[1]: session-40.scope: Deactivated successfully.
Oct 09 10:06:06 compute-2 systemd[1]: session-40.scope: Consumed 2min 4.734s CPU time, 646.9M memory peak, read 184.7M from disk, written 211.2M to disk.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Session 40 logged out. Waiting for processes to exit.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Removed session 40.
Oct 09 10:06:06 compute-2 ceph-mon[5983]: pgmap v993: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:06 compute-2 sshd-session[178918]: Accepted publickey for zuul from 192.168.122.10 port 58640 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:06:06 compute-2 systemd-logind[800]: New session 42 of user zuul.
Oct 09 10:06:06 compute-2 systemd[1]: Started Session 42 of User zuul.
Oct 09 10:06:06 compute-2 sshd-session[178918]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:06:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:06:06 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3279 syncs, 3.42 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4248 writes, 15K keys, 4248 commit groups, 1.0 writes per commit group, ingest: 18.47 MB, 0.03 MB/s
                                           Interval WAL: 4248 writes, 1849 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 09 10:06:06 compute-2 sudo[178922]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/cat /var/tmp/sos-osp/sosreport-compute-2-2025-10-09-qpnhxer.tar.xz
Oct 09 10:06:06 compute-2 sudo[178922]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:06:06 compute-2 sudo[178922]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:06 compute-2 sshd-session[178921]: Received disconnect from 192.168.122.10 port 58640:11: disconnected by user
Oct 09 10:06:06 compute-2 sshd-session[178921]: Disconnected from user zuul 192.168.122.10 port 58640
Oct 09 10:06:06 compute-2 sshd-session[178918]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:06 compute-2 systemd[1]: session-42.scope: Deactivated successfully.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Session 42 logged out. Waiting for processes to exit.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Removed session 42.
Oct 09 10:06:06 compute-2 sshd-session[178947]: Accepted publickey for zuul from 192.168.122.10 port 58648 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:06:06 compute-2 systemd-logind[800]: New session 43 of user zuul.
Oct 09 10:06:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:06.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:06 compute-2 systemd[1]: Started Session 43 of User zuul.
Oct 09 10:06:06 compute-2 sshd-session[178947]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:06:06 compute-2 nova_compute[163961]: 2025-10-09 10:06:06.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:06 compute-2 sudo[178951]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/rm -rf /var/tmp/sos-osp
Oct 09 10:06:06 compute-2 sudo[178951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:06:06 compute-2 sudo[178951]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:06 compute-2 sshd-session[178950]: Received disconnect from 192.168.122.10 port 58648:11: disconnected by user
Oct 09 10:06:06 compute-2 sshd-session[178950]: Disconnected from user zuul 192.168.122.10 port 58648
Oct 09 10:06:06 compute-2 sshd-session[178947]: pam_unix(sshd:session): session closed for user zuul
Oct 09 10:06:06 compute-2 systemd[1]: session-43.scope: Deactivated successfully.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Session 43 logged out. Waiting for processes to exit.
Oct 09 10:06:06 compute-2 systemd-logind[800]: Removed session 43.
Oct 09 10:06:07 compute-2 ceph-mon[5983]: pgmap v994: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:07 compute-2 nova_compute[163961]: 2025-10-09 10:06:07.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:08.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:08.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:10.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:06:10.285 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:10 compute-2 ceph-mon[5983]: pgmap v995: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:10.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:10 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:11 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:11 compute-2 nova_compute[163961]: 2025-10-09 10:06:11.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:12.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:12 compute-2 podman[178981]: 2025-10-09 10:06:12.227241098 +0000 UTC m=+0.057587263 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct 09 10:06:12 compute-2 ceph-mon[5983]: pgmap v996: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:06:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/2347622312' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:06:12 compute-2 nova_compute[163961]: 2025-10-09 10:06:12.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:12.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:13 compute-2 ceph-mon[5983]: pgmap v997: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:13 compute-2 sudo[178999]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:13 compute-2 sudo[178999]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:13 compute-2 sudo[178999]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:13 compute-2 podman[179023]: 2025-10-09 10:06:13.69198124 +0000 UTC m=+0.058999896 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Oct 09 10:06:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:14.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:14.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000004 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-2 radosgw[12043]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Oct 09 10:06:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:15 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:16 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:16.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:16 compute-2 ceph-mon[5983]: pgmap v998: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:16.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:16 compute-2 nova_compute[163961]: 2025-10-09 10:06:16.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:17 compute-2 systemd[1]: Stopping User Manager for UID 1000...
Oct 09 10:06:17 compute-2 systemd[171508]: Activating special unit Exit the Session...
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped target Main User Target.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped target Basic System.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped target Paths.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped target Sockets.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped target Timers.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped Daily Cleanup of User's Temporary Directories.
Oct 09 10:06:17 compute-2 systemd[171508]: Closed D-Bus User Message Bus Socket.
Oct 09 10:06:17 compute-2 systemd[171508]: Stopped Create User's Volatile Files and Directories.
Oct 09 10:06:17 compute-2 systemd[171508]: Removed slice User Application Slice.
Oct 09 10:06:17 compute-2 systemd[171508]: Reached target Shutdown.
Oct 09 10:06:17 compute-2 systemd[171508]: Finished Exit the Session.
Oct 09 10:06:17 compute-2 systemd[171508]: Reached target Exit the Session.
Oct 09 10:06:17 compute-2 systemd[1]: user@1000.service: Deactivated successfully.
Oct 09 10:06:17 compute-2 systemd[1]: Stopped User Manager for UID 1000.
Oct 09 10:06:17 compute-2 systemd[1]: Stopping User Runtime Directory /run/user/1000...
Oct 09 10:06:17 compute-2 systemd[1]: run-user-1000.mount: Deactivated successfully.
Oct 09 10:06:17 compute-2 systemd[1]: user-runtime-dir@1000.service: Deactivated successfully.
Oct 09 10:06:17 compute-2 systemd[1]: Stopped User Runtime Directory /run/user/1000.
Oct 09 10:06:17 compute-2 systemd[1]: Removed slice User Slice of UID 1000.
Oct 09 10:06:17 compute-2 systemd[1]: user-1000.slice: Consumed 2min 5.130s CPU time, 653.1M memory peak, read 184.7M from disk, written 211.2M to disk.
Oct 09 10:06:17 compute-2 nova_compute[163961]: 2025-10-09 10:06:17.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:18.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:18 compute-2 ceph-mon[5983]: pgmap v999: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 0 B/s wr, 8 op/s
Oct 09 10:06:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:18.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:20.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:20 compute-2 ceph-mon[5983]: pgmap v1000: 337 pgs: 337 active+clean; 41 MiB data, 306 MiB used, 60 GiB / 60 GiB avail; 5.4 KiB/s rd, 0 B/s wr, 8 op/s
Oct 09 10:06:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:20.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:20 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:21 compute-2 podman[179049]: 2025-10-09 10:06:21.245398588 +0000 UTC m=+0.077178633 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct 09 10:06:21 compute-2 nova_compute[163961]: 2025-10-09 10:06:21.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:22.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:22 compute-2 ceph-mon[5983]: pgmap v1001: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:22 compute-2 nova_compute[163961]: 2025-10-09 10:06:22.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:22.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:24 compute-2 ceph-mon[5983]: pgmap v1002: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:24.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:25 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:26 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:26 compute-2 ceph-mon[5983]: pgmap v1003: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:26.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:26 compute-2 nova_compute[163961]: 2025-10-09 10:06:26.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:27 compute-2 ceph-mon[5983]: pgmap v1004: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 0 B/s wr, 137 op/s
Oct 09 10:06:27 compute-2 nova_compute[163961]: 2025-10-09 10:06:27.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:28.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:28.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:30.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:30 compute-2 ceph-mon[5983]: pgmap v1005: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 78 KiB/s rd, 0 B/s wr, 129 op/s
Oct 09 10:06:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:30.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:30 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:31 compute-2 nova_compute[163961]: 2025-10-09 10:06:31.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:32.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:32 compute-2 podman[179083]: 2025-10-09 10:06:32.209682712 +0000 UTC m=+0.045304793 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2)
Oct 09 10:06:32 compute-2 ceph-mon[5983]: pgmap v1006: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 79 KiB/s rd, 0 B/s wr, 130 op/s
Oct 09 10:06:32 compute-2 nova_compute[163961]: 2025-10-09 10:06:32.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:32.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:33 compute-2 sudo[179103]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:33 compute-2 sudo[179103]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:33 compute-2 sudo[179103]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:34.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:34 compute-2 ceph-mon[5983]: pgmap v1007: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000021s ======
Oct 09 10:06:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:34.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000021s
Oct 09 10:06:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:35 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:36.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:36 compute-2 ceph-mon[5983]: pgmap v1008: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:36.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:36 compute-2 nova_compute[163961]: 2025-10-09 10:06:36.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:37 compute-2 nova_compute[163961]: 2025-10-09 10:06:37.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:38.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:38 compute-2 ceph-mon[5983]: pgmap v1009: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:38.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:40.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:40 compute-2 ceph-mon[5983]: pgmap v1010: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:40.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:40 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:41 compute-2 ceph-mon[5983]: pgmap v1011: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:41 compute-2 nova_compute[163961]: 2025-10-09 10:06:41.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:42.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:42 compute-2 nova_compute[163961]: 2025-10-09 10:06:42.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:42.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:43 compute-2 podman[179137]: 2025-10-09 10:06:43.222674486 +0000 UTC m=+0.048419491 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 10:06:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:44.111 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:44 compute-2 podman[179154]: 2025-10-09 10:06:44.208477694 +0000 UTC m=+0.044396436 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:06:44 compute-2 ceph-mon[5983]: pgmap v1012: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:06:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:44.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:06:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:45 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:46.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:46 compute-2 ceph-mon[5983]: pgmap v1013: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:46.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:46 compute-2 nova_compute[163961]: 2025-10-09 10:06:46.930 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:47 compute-2 nova_compute[163961]: 2025-10-09 10:06:47.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:48.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:48 compute-2 ceph-mon[5983]: pgmap v1014: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:48.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:50.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:50 compute-2 ceph-mon[5983]: pgmap v1015: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:06:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:50.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:50 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:51 compute-2 nova_compute[163961]: 2025-10-09 10:06:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:52.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:52 compute-2 podman[179179]: 2025-10-09 10:06:52.237508004 +0000 UTC m=+0.072257180 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct 09 10:06:52 compute-2 ceph-mon[5983]: pgmap v1016: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:06:52 compute-2 nova_compute[163961]: 2025-10-09 10:06:52.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:52.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:52 compute-2 sudo[179203]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:52 compute-2 sudo[179203]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:52 compute-2 sudo[179203]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-2 sudo[179228]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 check-host
Oct 09 10:06:53 compute-2 sudo[179228]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-2 sudo[179228]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-2 sudo[179271]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:53 compute-2 sudo[179271]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-2 sudo[179271]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-2 sudo[179296]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:06:53 compute-2 sudo[179296]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-2 sudo[179334]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:06:53 compute-2 sudo[179334]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-2 sudo[179334]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-2 sudo[179296]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:53 compute-2 sudo[179376]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:53 compute-2 sudo[179376]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:53 compute-2 sudo[179376]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:54 compute-2 sudo[179401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 list-networks
Oct 09 10:06:54 compute-2 sudo[179401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:54.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:54 compute-2 sudo[179401]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: pgmap v1017: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:54 compute-2 sudo[179443]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:06:54 compute-2 sudo[179443]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:54 compute-2 sudo[179443]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:54 compute-2 sudo[179468]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ceph-volume --fsid 286f8bf0-da72-5823-9a4e-ac4457d9e609 -- inventory --format=json-pretty --filter-for-batch
Oct 09 10:06:54 compute-2 sudo[179468]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.700517483 +0000 UTC m=+0.028966739 container create ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:06:54 compute-2 systemd[1]: Started libpod-conmon-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope.
Oct 09 10:06:54 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.746924269 +0000 UTC m=+0.075373545 container init ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.751484636 +0000 UTC m=+0.079933893 container start ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.752760732 +0000 UTC m=+0.081209988 container attach ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Oct 09 10:06:54 compute-2 magical_clarke[179538]: 167 167
Oct 09 10:06:54 compute-2 systemd[1]: libpod-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope: Deactivated successfully.
Oct 09 10:06:54 compute-2 conmon[179538]: conmon ba31dcb2b88856dbc775 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope/container/memory.events
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.75941134 +0000 UTC m=+0.087860596 container died ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct 09 10:06:54 compute-2 systemd[1]: var-lib-containers-storage-overlay-982f31cdf356673884f8ebe54543c502073e113a5001422dd198eef4ad91db48-merged.mount: Deactivated successfully.
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.776660348 +0000 UTC m=+0.105109604 container remove ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_clarke, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 10:06:54 compute-2 podman[179525]: 2025-10-09 10:06:54.689162085 +0000 UTC m=+0.017611351 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:06:54 compute-2 systemd[1]: libpod-conmon-ba31dcb2b88856dbc775302f2a0b28e42d87f8a1fa1167176e0c9d38fd94f64b.scope: Deactivated successfully.
Oct 09 10:06:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:54.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.928982656 +0000 UTC m=+0.042325904 container create 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Oct 09 10:06:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:54 compute-2 systemd[1]: Started libpod-conmon-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope.
Oct 09 10:06:54 compute-2 systemd[1]: Started libcrun container.
Oct 09 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct 09 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct 09 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct 09 10:06:54 compute-2 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct 09 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.991765644 +0000 UTC m=+0.105108892 container init 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct 09 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.996735984 +0000 UTC m=+0.110079222 container start 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Oct 09 10:06:54 compute-2 podman[179560]: 2025-10-09 10:06:54.997990569 +0000 UTC m=+0.111333838 container attach 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct 09 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:54.917561755 +0000 UTC m=+0.030905023 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Oct 09 10:06:55 compute-2 suspicious_booth[179573]: [
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:     {
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "available": false,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "being_replaced": false,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "ceph_device_lvm": false,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "device_id": "QEMU_DVD-ROM_QM00001",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "lsm_data": {},
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "lvs": [],
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "path": "/dev/sr0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "rejected_reasons": [
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "Insufficient space (<5GB)",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "Has a FileSystem"
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         ],
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         "sys_api": {
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "actuators": null,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "device_nodes": [
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:                 "sr0"
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             ],
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "devname": "sr0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "human_readable_size": "474.00 KB",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "id_bus": "ata",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "model": "QEMU DVD-ROM",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "nr_requests": "64",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "parent": "/dev/sr0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "partitions": {},
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "path": "/dev/sr0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "removable": "1",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "rev": "2.5+",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "ro": "0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "rotational": "0",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "sas_address": "",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "sas_device_handle": "",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "scheduler_mode": "mq-deadline",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "sectors": 0,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "sectorsize": "2048",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "size": 485376.0,
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "support_discard": "2048",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "type": "disk",
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:             "vendor": "QEMU"
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:         }
Oct 09 10:06:55 compute-2 suspicious_booth[179573]:     }
Oct 09 10:06:55 compute-2 suspicious_booth[179573]: ]
Oct 09 10:06:55 compute-2 systemd[1]: libpod-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope: Deactivated successfully.
Oct 09 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:55.619427703 +0000 UTC m=+0.732770951 container died 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct 09 10:06:55 compute-2 systemd[1]: var-lib-containers-storage-overlay-f8f15bff331b73cc3d2fd9d0ab00ac29763a020bd2536323b01b3055ff5aaf86-merged.mount: Deactivated successfully.
Oct 09 10:06:55 compute-2 podman[179560]: 2025-10-09 10:06:55.646712671 +0000 UTC m=+0.760055919 container remove 70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_booth, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct 09 10:06:55 compute-2 systemd[1]: libpod-conmon-70d951f2fd6c675105b4138a2fb44bf6ef7f6b5a76e542752c6ffc1ee9441d90.scope: Deactivated successfully.
Oct 09 10:06:55 compute-2 sudo[179468]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:55 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:06:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:56.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:56 compute-2 ceph-mon[5983]: pgmap v1018: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:06:56 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:06:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:56.909 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:56 compute-2 nova_compute[163961]: 2025-10-09 10:06:56.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:57 compute-2 ceph-mon[5983]: pgmap v1019: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:06:57 compute-2 nova_compute[163961]: 2025-10-09 10:06:57.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:06:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:06:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:06:58.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:06:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/703560270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1093705681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:06:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:06:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:06:58.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:06:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:59 compute-2 sudo[180761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:06:59 compute-2 sudo[180761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:06:59 compute-2 sudo[180761]: pam_unix(sudo:session): session closed for user root
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.189 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.216 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.217 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.217 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.218 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.218 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:06:59 compute-2 ceph-mon[5983]: pgmap v1020: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:06:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:59 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:06:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:06:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3815037012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:06:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.606 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.849 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4969MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.851 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.894 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.895 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:06:59 compute-2 nova_compute[163961]: 2025-10-09 10:06:59.905 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:06:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:06:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:06:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:06:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:06:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:00 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:00.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:07:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2014211279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.291 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.386s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.295 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.309 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.310 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:07:00 compute-2 nova_compute[163961]: 2025-10-09 10:07:00.311 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:07:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3815037012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2836267413' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1231242342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2014211279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:00.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.294 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.295 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.295 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.348 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.349 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:07:01 compute-2 ceph-mon[5983]: pgmap v1021: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #55. Immutable memtables: 0.
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.507494) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 55
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421507619, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2319, "num_deletes": 259, "total_data_size": 5771250, "memory_usage": 5869328, "flush_reason": "Manual Compaction"}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #56: started
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421521695, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 56, "file_size": 3653044, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28420, "largest_seqno": 30734, "table_properties": {"data_size": 3642593, "index_size": 6305, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3077, "raw_key_size": 26278, "raw_average_key_size": 21, "raw_value_size": 3620093, "raw_average_value_size": 3006, "num_data_blocks": 273, "num_entries": 1204, "num_filter_entries": 1204, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004276, "oldest_key_time": 1760004276, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 14217 microseconds, and 11672 cpu microseconds.
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.521736) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #56: 3653044 bytes OK
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.521757) [db/memtable_list.cc:519] [default] Level-0 commit table #56 started
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522282) [db/memtable_list.cc:722] [default] Level-0 commit table #56: memtable #1 done
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522297) EVENT_LOG_v1 {"time_micros": 1760004421522292, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.522313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 5759846, prev total WAL file size 5759846, number of live WAL files 2.
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000052.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.523227) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [56(3567KB)], [54(13MB)]
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421523281, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [56], "files_L6": [54], "score": -1, "input_data_size": 17779883, "oldest_snapshot_seqno": -1}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #57: 6468 keys, 17623706 bytes, temperature: kUnknown
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421565576, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 57, "file_size": 17623706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17576986, "index_size": 29458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164677, "raw_average_key_size": 25, "raw_value_size": 17456901, "raw_average_value_size": 2698, "num_data_blocks": 1206, "num_entries": 6468, "num_filter_entries": 6468, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 57, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.565769) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17623706 bytes
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566177) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 419.8 rd, 416.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.5, 13.5 +0.0 blob) out(16.8 +0.0 blob), read-write-amplify(9.7) write-amplify(4.8) OK, records in: 7004, records dropped: 536 output_compression: NoCompression
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.566194) EVENT_LOG_v1 {"time_micros": 1760004421566187, "job": 32, "event": "compaction_finished", "compaction_time_micros": 42349, "compaction_time_cpu_micros": 27111, "output_level": 6, "num_output_files": 1, "total_output_size": 17623706, "num_input_records": 7004, "num_output_records": 6468, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421566697, "job": 32, "event": "table_file_deletion", "file_number": 56}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000054.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004421568506, "job": 32, "event": "table_file_deletion", "file_number": 54}
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.523165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568561) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568563) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568564) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568566) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:01.568567) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:01 compute-2 nova_compute[163961]: 2025-10-09 10:07:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:02.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:02 compute-2 ceph-mon[5983]: pgmap v1022: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:07:02 compute-2 nova_compute[163961]: 2025-10-09 10:07:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:02.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:03 compute-2 podman[180834]: 2025-10-09 10:07:03.217590763 +0000 UTC m=+0.044201430 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct 09 10:07:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:04.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:04 compute-2 nova_compute[163961]: 2025-10-09 10:07:04.222 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:04 compute-2 ceph-mon[5983]: pgmap v1023: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:07:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:04.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:05 compute-2 nova_compute[163961]: 2025-10-09 10:07:05.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:05 compute-2 nova_compute[163961]: 2025-10-09 10:07:05.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:06.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:06 compute-2 ceph-mon[5983]: pgmap v1024: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2 op/s
Oct 09 10:07:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:06.920 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:06 compute-2 nova_compute[163961]: 2025-10-09 10:07:06.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:07 compute-2 nova_compute[163961]: 2025-10-09 10:07:07.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:08.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:08 compute-2 ceph-mon[5983]: pgmap v1025: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:08.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:10.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:07:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:07:10 compute-2 ceph-mon[5983]: pgmap v1026: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:10.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:11 compute-2 nova_compute[163961]: 2025-10-09 10:07:11.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:12.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:12 compute-2 nova_compute[163961]: 2025-10-09 10:07:12.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:12 compute-2 nova_compute[163961]: 2025-10-09 10:07:12.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:12 compute-2 ceph-mon[5983]: pgmap v1027: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1 op/s
Oct 09 10:07:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:07:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3653895063' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:07:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:12.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:13 compute-2 sudo[180862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:13 compute-2 sudo[180862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:13 compute-2 sudo[180862]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:13 compute-2 podman[180886]: 2025-10-09 10:07:13.877729703 +0000 UTC m=+0.042056616 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct 09 10:07:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:14.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:14 compute-2 ceph-mon[5983]: pgmap v1028: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:14.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:15 compute-2 podman[180904]: 2025-10-09 10:07:15.212729055 +0000 UTC m=+0.041747414 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct 09 10:07:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:16.145 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:16 compute-2 ceph-mon[5983]: pgmap v1029: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:16.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:16 compute-2 nova_compute[163961]: 2025-10-09 10:07:16.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:17 compute-2 nova_compute[163961]: 2025-10-09 10:07:17.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:18.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:18 compute-2 ceph-mon[5983]: pgmap v1030: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:18.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:20.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:20 compute-2 ceph-mon[5983]: pgmap v1031: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:20.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:21 compute-2 nova_compute[163961]: 2025-10-09 10:07:21.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:22.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:22 compute-2 nova_compute[163961]: 2025-10-09 10:07:22.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:22 compute-2 ceph-mon[5983]: pgmap v1032: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:22.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:23 compute-2 podman[180929]: 2025-10-09 10:07:23.248666086 +0000 UTC m=+0.077871786 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 10:07:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:24.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:24 compute-2 ceph-mon[5983]: pgmap v1033: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:24.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:26.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:26 compute-2 ceph-mon[5983]: pgmap v1034: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:26.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:26 compute-2 nova_compute[163961]: 2025-10-09 10:07:26.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:27 compute-2 nova_compute[163961]: 2025-10-09 10:07:27.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:28.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:28 compute-2 ceph-mon[5983]: pgmap v1035: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:28.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:30.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:30 compute-2 ceph-mon[5983]: pgmap v1036: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:30.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:31 compute-2 nova_compute[163961]: 2025-10-09 10:07:31.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:32.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:32 compute-2 nova_compute[163961]: 2025-10-09 10:07:32.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:32 compute-2 ceph-mon[5983]: pgmap v1037: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:32.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:33 compute-2 sudo[180963]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:33 compute-2 sudo[180963]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:33 compute-2 sudo[180963]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:33 compute-2 podman[180987]: 2025-10-09 10:07:33.940295152 +0000 UTC m=+0.038427165 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct 09 10:07:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000020s ======
Oct 09 10:07:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:34.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct 09 10:07:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:34 compute-2 ceph-mon[5983]: pgmap v1038: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:34.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:36.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:36 compute-2 ceph-mon[5983]: pgmap v1039: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:36.950 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:36 compute-2 nova_compute[163961]: 2025-10-09 10:07:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:37 compute-2 nova_compute[163961]: 2025-10-09 10:07:37.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:38.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:38 compute-2 ceph-mon[5983]: pgmap v1040: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:38.952 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:40.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:40 compute-2 ceph-mon[5983]: pgmap v1041: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:40.954 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:41 compute-2 nova_compute[163961]: 2025-10-09 10:07:41.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:42.176 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:42 compute-2 nova_compute[163961]: 2025-10-09 10:07:42.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:42 compute-2 ceph-mon[5983]: pgmap v1042: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:42.956 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:44.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:44 compute-2 podman[181016]: 2025-10-09 10:07:44.228534415 +0000 UTC m=+0.057358673 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct 09 10:07:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:44 compute-2 ceph-mon[5983]: pgmap v1043: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:44.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:46.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:46 compute-2 podman[181035]: 2025-10-09 10:07:46.224446211 +0000 UTC m=+0.059054341 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Oct 09 10:07:46 compute-2 ceph-mon[5983]: pgmap v1044: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:46 compute-2 nova_compute[163961]: 2025-10-09 10:07:46.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:46.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:47 compute-2 nova_compute[163961]: 2025-10-09 10:07:47.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:48.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:48 compute-2 ceph-mon[5983]: pgmap v1045: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #58. Immutable memtables: 0.
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.952588) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 58
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468952632, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 662, "num_deletes": 251, "total_data_size": 1271281, "memory_usage": 1284576, "flush_reason": "Manual Compaction"}
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #59: started
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468957291, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 59, "file_size": 836219, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30739, "largest_seqno": 31396, "table_properties": {"data_size": 832917, "index_size": 1210, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7395, "raw_average_key_size": 19, "raw_value_size": 826465, "raw_average_value_size": 2124, "num_data_blocks": 55, "num_entries": 389, "num_filter_entries": 389, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004422, "oldest_key_time": 1760004422, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 4750 microseconds, and 3738 cpu microseconds.
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957337) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #59: 836219 bytes OK
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957360) [db/memtable_list.cc:519] [default] Level-0 commit table #59 started
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957724) [db/memtable_list.cc:722] [default] Level-0 commit table #59: memtable #1 done
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957736) EVENT_LOG_v1 {"time_micros": 1760004468957733, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.957753) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1267680, prev total WAL file size 1267680, number of live WAL files 2.
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000055.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.958234) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [59(816KB)], [57(16MB)]
Oct 09 10:07:48 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004468958272, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [59], "files_L6": [57], "score": -1, "input_data_size": 18459925, "oldest_snapshot_seqno": -1}
Oct 09 10:07:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:48.962 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #60: 6346 keys, 16354784 bytes, temperature: kUnknown
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469006046, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 60, "file_size": 16354784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 16309863, "index_size": 27979, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15877, "raw_key_size": 162794, "raw_average_key_size": 25, "raw_value_size": 16192899, "raw_average_value_size": 2551, "num_data_blocks": 1141, "num_entries": 6346, "num_filter_entries": 6346, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004468, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 60, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006339) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 16354784 bytes
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006919) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 385.6 rd, 341.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 16.8 +0.0 blob) out(15.6 +0.0 blob), read-write-amplify(41.6) write-amplify(19.6) OK, records in: 6857, records dropped: 511 output_compression: NoCompression
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.006936) EVENT_LOG_v1 {"time_micros": 1760004469006927, "job": 34, "event": "compaction_finished", "compaction_time_micros": 47878, "compaction_time_cpu_micros": 35803, "output_level": 6, "num_output_files": 1, "total_output_size": 16354784, "num_input_records": 6857, "num_output_records": 6346, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469007317, "job": 34, "event": "table_file_deletion", "file_number": 59}
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000057.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004469010292, "job": 34, "event": "table_file_deletion", "file_number": 57}
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:48.958186) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010353) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:07:49.010361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:07:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:07:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:50.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:50 compute-2 ceph-mon[5983]: pgmap v1046: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:50.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:51 compute-2 nova_compute[163961]: 2025-10-09 10:07:51.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:52.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:52 compute-2 nova_compute[163961]: 2025-10-09 10:07:52.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:52.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:52 compute-2 ceph-mon[5983]: pgmap v1047: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:53 compute-2 sudo[181060]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:07:53 compute-2 sudo[181060]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:53 compute-2 sudo[181060]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:54 compute-2 podman[181084]: 2025-10-09 10:07:54.045378033 +0000 UTC m=+0.072465963 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Oct 09 10:07:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:54.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:54.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:54 compute-2 ceph-mon[5983]: pgmap v1048: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:07:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:56.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:07:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:56 compute-2 nova_compute[163961]: 2025-10-09 10:07:56.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:07:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:56.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:07:56 compute-2 ceph-mon[5983]: pgmap v1049: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:07:57 compute-2 nova_compute[163961]: 2025-10-09 10:07:57.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:07:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:07:58.196 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:07:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:07:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:07:58.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:07:58 compute-2 ceph-mon[5983]: pgmap v1050: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:07:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:07:59 compute-2 sudo[181113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:07:59 compute-2 sudo[181113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:59 compute-2 sudo[181113]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.174 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.195 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.196 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.197 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.197 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:07:59 compute-2 sudo[181138]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:07:59 compute-2 sudo[181138]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:07:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:07:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/700554729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.568 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:07:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:07:59 compute-2 sudo[181138]: pam_unix(sudo:session): session closed for user root
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.837 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.838 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4943MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.839 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.839 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.890 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.890 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:07:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:07:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:07:59 compute-2 nova_compute[163961]: 2025-10-09 10:07:59.962 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/273661975' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/700554729' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:08:00 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:08:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:00.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:08:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1534538487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.324 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.362s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.329 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.339 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.341 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:08:00 compute-2 nova_compute[163961]: 2025-10-09 10:08:00.341 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:08:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:00.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:01 compute-2 ceph-mon[5983]: pgmap v1051: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1270285005' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1534538487' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2268930699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.339 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.340 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:08:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:01 compute-2 nova_compute[163961]: 2025-10-09 10:08:01.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/33496731' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.173 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.184 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:08:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:02.200 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:02 compute-2 nova_compute[163961]: 2025-10-09 10:08:02.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:02.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:03 compute-2 ceph-mon[5983]: pgmap v1052: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:03 compute-2 nova_compute[163961]: 2025-10-09 10:08:03.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:03 compute-2 sudo[181241]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:08:03 compute-2 sudo[181241]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:03 compute-2 sudo[181241]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:04.202 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:04 compute-2 podman[181266]: 2025-10-09 10:08:04.244170273 +0000 UTC m=+0.070929899 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct 09 10:08:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:04 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:08:04 compute-2 ceph-mon[5983]: pgmap v1053: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:04.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:05 compute-2 nova_compute[163961]: 2025-10-09 10:08:05.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:05 compute-2 nova_compute[163961]: 2025-10-09 10:08:05.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:06 compute-2 ceph-mon[5983]: pgmap v1054: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:06 compute-2 nova_compute[163961]: 2025-10-09 10:08:06.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:06.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:07 compute-2 nova_compute[163961]: 2025-10-09 10:08:07.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:08:07 compute-2 nova_compute[163961]: 2025-10-09 10:08:07.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:08.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:08 compute-2 ceph-mon[5983]: pgmap v1055: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:08.981 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:10.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:08:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:10.286 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:08:10 compute-2 ceph-mon[5983]: pgmap v1056: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.0 KiB/s rd, 1 op/s
Oct 09 10:08:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:10.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:11 compute-2 nova_compute[163961]: 2025-10-09 10:08:11.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:12.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:12 compute-2 nova_compute[163961]: 2025-10-09 10:08:12.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:12 compute-2 ceph-mon[5983]: pgmap v1057: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:08:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/1775733170' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:08:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:12.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:14 compute-2 sudo[181297]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:14 compute-2 sudo[181297]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:14 compute-2 sudo[181297]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:14.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:14 compute-2 ceph-mon[5983]: pgmap v1058: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:14.987 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:15 compute-2 podman[181323]: 2025-10-09 10:08:15.215742648 +0000 UTC m=+0.044857175 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:08:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:16.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:16 compute-2 ceph-mon[5983]: pgmap v1059: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:16 compute-2 nova_compute[163961]: 2025-10-09 10:08:16.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:16.989 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:17 compute-2 podman[181341]: 2025-10-09 10:08:17.218460746 +0000 UTC m=+0.048203735 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct 09 10:08:17 compute-2 nova_compute[163961]: 2025-10-09 10:08:17.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:18.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:18 compute-2 ceph-mon[5983]: pgmap v1060: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:18.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:19 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:20.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:20 compute-2 ceph-mon[5983]: pgmap v1061: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:20.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.736 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.770 2 DEBUG oslo_concurrency.processutils [None req-06752881-e4c7-4336-b1c1-bcd187f39813 3a4ac457589b496085910d92d06034e7 a53d5690b6a54109990182326650a2b8 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:08:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:21 compute-2 nova_compute[163961]: 2025-10-09 10:08:21.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:22.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:22 compute-2 nova_compute[163961]: 2025-10-09 10:08:22.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:22 compute-2 ceph-mon[5983]: pgmap v1062: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:22.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:24.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:24 compute-2 podman[181366]: 2025-10-09 10:08:24.231612449 +0000 UTC m=+0.062669927 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:08:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:24 compute-2 ceph-mon[5983]: pgmap v1063: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:24.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:25 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:25.714 71793 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '86:53:6e', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '26:2f:47:35:f4:09'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct 09 10:08:25 compute-2 nova_compute[163961]: 2025-10-09 10:08:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:25 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:25.715 71793 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct 09 10:08:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:26.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:26 compute-2 ceph-mon[5983]: pgmap v1064: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:26 compute-2 nova_compute[163961]: 2025-10-09 10:08:26.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:08:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:26.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:08:27 compute-2 nova_compute[163961]: 2025-10-09 10:08:27.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:28.229 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:28 compute-2 ceph-mon[5983]: pgmap v1065: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:29.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:30 compute-2 ceph-mon[5983]: pgmap v1066: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:31.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:31 compute-2 nova_compute[163961]: 2025-10-09 10:08:31.975 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:32 compute-2 nova_compute[163961]: 2025-10-09 10:08:32.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:32 compute-2 ceph-mon[5983]: pgmap v1067: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:33.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:33 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:08:33.718 71793 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=c24becb7-a313-4586-a73e-1530a4367da3, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct 09 10:08:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:34 compute-2 sudo[181400]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:34 compute-2 sudo[181400]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:34 compute-2 sudo[181400]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:34.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:34 compute-2 ceph-mon[5983]: pgmap v1068: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:34 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:35.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:35 compute-2 podman[181426]: 2025-10-09 10:08:35.250645769 +0000 UTC m=+0.077976823 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:08:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:36.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:36 compute-2 ceph-mon[5983]: pgmap v1069: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:36 compute-2 nova_compute[163961]: 2025-10-09 10:08:36.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:37.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:37 compute-2 nova_compute[163961]: 2025-10-09 10:08:37.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:38.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:38 compute-2 ceph-mon[5983]: pgmap v1070: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:39.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:40.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:40 compute-2 ceph-mon[5983]: pgmap v1071: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:41.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:41 compute-2 nova_compute[163961]: 2025-10-09 10:08:41.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:42.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:42 compute-2 nova_compute[163961]: 2025-10-09 10:08:42.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:42 compute-2 ceph-mon[5983]: pgmap v1072: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:43.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:44.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:44 compute-2 ceph-mon[5983]: pgmap v1073: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:08:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:45.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:46 compute-2 podman[181454]: 2025-10-09 10:08:46.222918295 +0000 UTC m=+0.050152739 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct 09 10:08:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:46.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:46 compute-2 ceph-mon[5983]: pgmap v1074: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:08:46 compute-2 nova_compute[163961]: 2025-10-09 10:08:46.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:47.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:47 compute-2 nova_compute[163961]: 2025-10-09 10:08:47.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:48 compute-2 podman[181472]: 2025-10-09 10:08:48.22134982 +0000 UTC m=+0.051577365 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd)
Oct 09 10:08:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:48.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:48 compute-2 ceph-mon[5983]: pgmap v1075: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:08:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:49.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:08:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:49 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:08:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:50.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:50 compute-2 ceph-mon[5983]: pgmap v1076: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:51.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:51 compute-2 nova_compute[163961]: 2025-10-09 10:08:51.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:52.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:52 compute-2 nova_compute[163961]: 2025-10-09 10:08:52.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:53 compute-2 ceph-mon[5983]: pgmap v1077: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:53.027 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:54 compute-2 sudo[181495]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:08:54 compute-2 sudo[181495]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:08:54 compute-2 sudo[181495]: pam_unix(sudo:session): session closed for user root
Oct 09 10:08:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:08:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:54.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:08:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:55 compute-2 ceph-mon[5983]: pgmap v1078: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:55.029 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:55 compute-2 podman[181521]: 2025-10-09 10:08:55.228102504 +0000 UTC m=+0.059947635 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct 09 10:08:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:56.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:56 compute-2 nova_compute[163961]: 2025-10-09 10:08:56.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:57 compute-2 ceph-mon[5983]: pgmap v1079: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:08:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:57.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:57 compute-2 nova_compute[163961]: 2025-10-09 10:08:57.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:08:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:08:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:08:58.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:08:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:08:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:08:59 compute-2 ceph-mon[5983]: pgmap v1080: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:08:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:08:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:08:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:08:59.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:08:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:08:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:08:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:08:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.174 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.193 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.194 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.195 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:09:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:00.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:09:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1526071422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.570 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.803 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.805 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4959MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": "0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.805 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.806 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.858 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.858 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:09:00 compute-2 nova_compute[163961]: 2025-10-09 10:09:00.870 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:09:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:01.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:01 compute-2 ceph-mon[5983]: pgmap v1081: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2013677285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1526071422' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:09:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1405609489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.245 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.251 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.261 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.262 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.262 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.456s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:01 compute-2 nova_compute[163961]: 2025-10-09 10:09:01.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3812894481' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1405609489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.260 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:09:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:02.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:02 compute-2 nova_compute[163961]: 2025-10-09 10:09:02.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:03.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:03 compute-2 ceph-mon[5983]: pgmap v1082: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:03 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2740802142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.172 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-2 nova_compute[163961]: 2025-10-09 10:09:03.181 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:03 compute-2 sudo[181597]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:09:03 compute-2 sudo[181597]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:03 compute-2 sudo[181597]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:03 compute-2 sudo[181622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:09:03 compute-2 sudo[181622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:04 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2969860949' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:09:04 compute-2 sudo[181622]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:04.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:05.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:05 compute-2 ceph-mon[5983]: pgmap v1083: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: pgmap v1084: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:09:05 compute-2 ceph-mon[5983]: pgmap v1085: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:06 compute-2 nova_compute[163961]: 2025-10-09 10:09:06.176 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:06 compute-2 podman[181678]: 2025-10-09 10:09:06.219591465 +0000 UTC m=+0.047638360 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 10:09:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:06.279 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:06 compute-2 nova_compute[163961]: 2025-10-09 10:09:06.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:07.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:07 compute-2 nova_compute[163961]: 2025-10-09 10:09:07.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:07 compute-2 ceph-mon[5983]: pgmap v1086: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:07 compute-2 nova_compute[163961]: 2025-10-09 10:09:07.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:08.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:08 compute-2 sudo[181698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:09:08 compute-2 sudo[181698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:08 compute-2 sudo[181698]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:08 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:09.043 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:09 compute-2 nova_compute[163961]: 2025-10-09 10:09:09.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:09 compute-2 ceph-mon[5983]: pgmap v1087: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:09 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:09:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:10.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:09:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:09:10.287 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:09:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:11.045 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:11 compute-2 ceph-mon[5983]: pgmap v1088: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 718 B/s rd, 0 op/s
Oct 09 10:09:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:11 compute-2 nova_compute[163961]: 2025-10-09 10:09:11.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:12 compute-2 nova_compute[163961]: 2025-10-09 10:09:12.168 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:09:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:12.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:09:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/3764022889' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:09:12 compute-2 nova_compute[163961]: 2025-10-09 10:09:12.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:13.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:13 compute-2 ceph-mon[5983]: pgmap v1089: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 1 op/s
Oct 09 10:09:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:14 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:14 compute-2 sudo[181729]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:14 compute-2 sudo[181729]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:14 compute-2 sudo[181729]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:14.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:14 compute-2 ceph-mon[5983]: pgmap v1090: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1 op/s
Oct 09 10:09:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:15.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:16.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:16 compute-2 nova_compute[163961]: 2025-10-09 10:09:16.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:17.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:17 compute-2 podman[181756]: 2025-10-09 10:09:17.210880482 +0000 UTC m=+0.042837128 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent)
Oct 09 10:09:17 compute-2 ceph-mon[5983]: pgmap v1091: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:17 compute-2 nova_compute[163961]: 2025-10-09 10:09:17.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:18.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:18 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:19 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:19.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:19 compute-2 podman[181776]: 2025-10-09 10:09:19.217661504 +0000 UTC m=+0.049354336 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct 09 10:09:19 compute-2 ceph-mon[5983]: pgmap v1092: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:20.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:21.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:21 compute-2 ceph-mon[5983]: pgmap v1093: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:22 compute-2 nova_compute[163961]: 2025-10-09 10:09:22.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:22.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:22 compute-2 nova_compute[163961]: 2025-10-09 10:09:22.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:23.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:23 compute-2 ceph-mon[5983]: pgmap v1094: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:23 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:24 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:24.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:25.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:25 compute-2 ceph-mon[5983]: pgmap v1095: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:26 compute-2 podman[181800]: 2025-10-09 10:09:26.23137145 +0000 UTC m=+0.062357709 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct 09 10:09:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:26.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:27 compute-2 nova_compute[163961]: 2025-10-09 10:09:27.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:27.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:27 compute-2 ceph-mon[5983]: pgmap v1096: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:27 compute-2 nova_compute[163961]: 2025-10-09 10:09:27.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:28.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:28 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:29 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:29.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:29 compute-2 ceph-mon[5983]: pgmap v1097: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:30.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:31.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:31 compute-2 ceph-mon[5983]: pgmap v1098: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:32 compute-2 nova_compute[163961]: 2025-10-09 10:09:32.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:32.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:32 compute-2 nova_compute[163961]: 2025-10-09 10:09:32.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:33.067 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:33 compute-2 ceph-mon[5983]: pgmap v1099: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:33 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:34 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:34 compute-2 sudo[181832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:34 compute-2 sudo[181832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:34 compute-2 sudo[181832]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:34.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:34 compute-2 ceph-mon[5983]: pgmap v1100: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:35.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:36.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:36 compute-2 ceph-mon[5983]: pgmap v1101: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:37 compute-2 nova_compute[163961]: 2025-10-09 10:09:37.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:37.071 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:37 compute-2 podman[181859]: 2025-10-09 10:09:37.218447914 +0000 UTC m=+0.046788489 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct 09 10:09:37 compute-2 nova_compute[163961]: 2025-10-09 10:09:37.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:38.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:38 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:39 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:39.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:39 compute-2 ceph-mon[5983]: pgmap v1102: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:40.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:41.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:41 compute-2 ceph-mon[5983]: pgmap v1103: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:42 compute-2 nova_compute[163961]: 2025-10-09 10:09:42.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:42.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:42 compute-2 nova_compute[163961]: 2025-10-09 10:09:42.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:43.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:43 compute-2 ceph-mon[5983]: pgmap v1104: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:43 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:44 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:44.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:45.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:45 compute-2 ceph-mon[5983]: pgmap v1105: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:46.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:47 compute-2 nova_compute[163961]: 2025-10-09 10:09:47.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:47.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:47 compute-2 ceph-mon[5983]: pgmap v1106: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:47 compute-2 nova_compute[163961]: 2025-10-09 10:09:47.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:48 compute-2 podman[181888]: 2025-10-09 10:09:48.216995723 +0000 UTC m=+0.045402645 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 09 10:09:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:48.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:48 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:49 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:49.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:49 compute-2 ceph-mon[5983]: pgmap v1107: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:50 compute-2 podman[181906]: 2025-10-09 10:09:50.216588283 +0000 UTC m=+0.049730594 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct 09 10:09:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:50.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:09:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:09:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:51.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:09:51 compute-2 ceph-mon[5983]: pgmap v1108: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:52 compute-2 nova_compute[163961]: 2025-10-09 10:09:52.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:52.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:52 compute-2 nova_compute[163961]: 2025-10-09 10:09:52.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:53.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:53 compute-2 ceph-mon[5983]: pgmap v1109: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:53 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:54 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:09:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:54.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:09:54 compute-2 sudo[181928]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:09:54 compute-2 sudo[181928]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:09:54 compute-2 sudo[181928]: pam_unix(sudo:session): session closed for user root
Oct 09 10:09:54 compute-2 ceph-mon[5983]: pgmap v1110: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:55.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:56.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:57 compute-2 nova_compute[163961]: 2025-10-09 10:09:57.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:57.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:57 compute-2 podman[181955]: 2025-10-09 10:09:57.23203865 +0000 UTC m=+0.065297371 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 10:09:57 compute-2 ceph-mon[5983]: pgmap v1111: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:09:57 compute-2 nova_compute[163961]: 2025-10-09 10:09:57.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:09:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:09:58.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:58 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:09:59 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:09:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:09:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:09:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:09:59.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:09:59 compute-2 ceph-mon[5983]: pgmap v1112: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:09:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:09:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:09:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:09:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:00.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:00 compute-2 ceph-mon[5983]: overall HEALTH_WARN 1 failed cephadm daemon(s)
Oct 09 10:10:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:01 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:01 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:01 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:01.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.171 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.189 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Auditing locally available compute resources for compute-2.ctlplane.example.com (node: compute-2.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.190 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:10:01 compute-2 ceph-mon[5983]: pgmap v1113: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2459378375' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2459720255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:10:01 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1012934223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.553 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.792 2 WARNING nova.virt.libvirt.driver [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.794 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Hypervisor/Node resource view: name=compute-2.ctlplane.example.com free_ram=4971MB free_disk=59.988277435302734GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_7", "address": "0000:00:02.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_05_00_0", "address": "0000:05:00.0", "product_id": "1045", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1045", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_6", "address": "0000:00:03.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_0", "address": "0000:00:1f.0", "product_id": "2918", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2918", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_03_00_0", "address": "0000:03:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_06_00_0", "address": "0000:06:00.0", "product_id": "1044", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1044", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_6", "address": "0000:00:02.6", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_1", "address": "0000:00:03.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": 
"label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_4", "address": "0000:00:03.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_2", "address": "0000:00:03.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_2", "address": "0000:00:02.2", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_4", "address": "0000:00:02.4", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_04_00_0", "address": "0000:04:00.0", "product_id": "1042", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1042", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_3", "address": "0000:00:1f.3", "product_id": "2930", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2930", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_3", "address": "0000:00:02.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_5", "address": "0000:00:03.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "29c0", "vendor_id": "8086", "numa_node": null, "label": "label_8086_29c0", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_1f_2", "address": 
"0000:00:1f.2", "product_id": "2922", "vendor_id": "8086", "numa_node": null, "label": "label_8086_2922", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_01_00_0", "address": "0000:01:00.0", "product_id": "000e", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000e", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_7", "address": "0000:00:03.7", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_07_00_0", "address": "0000:07:00.0", "product_id": "1041", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1041", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_5", "address": "0000:00:02.5", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_1", "address": "0000:00:02.1", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_02_01_0", "address": "0000:02:01.0", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_3", "address": "0000:00:03.3", "product_id": "000c", "vendor_id": "1b36", "numa_node": null, "label": "label_1b36_000c", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.795 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.795 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.845 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Total usable vcpus: 4, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.845 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Final resource view: name=compute-2.ctlplane.example.com phys_ram=7680MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=4 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.858 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing inventories for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.871 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating ProviderTree inventory for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 from _refresh_and_get_inventory using data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.871 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Updating inventory in ProviderTree for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 with inventory: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.884 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing aggregate associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.898 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Refreshing trait associations for resource provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8, traits: HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX,HW_CPU_X86_FMA3,HW_CPU_X86_SSE42,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_AVX512VPCLMULQDQ,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_AVX,HW_CPU_X86_CLMUL,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSSE3,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,HW_CPU_X86_ABM,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_DEVICE_TAGGING,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_STORAGE_BUS_SATA,COMPUTE_VIOMMU_MODEL_AUTO,HW_CPU_X86_SSE2,HW_CPU_X86_BMI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_AVX2,HW_CPU_X86_AVX512VAES,COMPUTE_TRUSTED_CERTS,COMPUTE_RESCUE_BFV,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_BMI2,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct 09 10:10:01 compute-2 nova_compute[163961]: 2025-10-09 10:10:01.911 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct 09 10:10:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:01 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:01 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:02 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Oct 09 10:10:02 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1023247151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.274 2 DEBUG oslo_concurrency.processutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.279 2 DEBUG nova.compute.provider_tree [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed in ProviderTree for provider: 41a86af9-054a-49c9-9d2e-f0396c1c31a8 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.291 2 DEBUG nova.scheduler.client.report [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Inventory has not changed for provider 41a86af9-054a-49c9-9d2e-f0396c1c31a8 based on inventory data: {'VCPU': {'total': 4, 'reserved': 0, 'min_unit': 1, 'max_unit': 4, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7680, 'reserved': 512, 'min_unit': 1, 'max_unit': 7680, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.292 2 DEBUG nova.compute.resource_tracker [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Compute_service record updated for compute-2.ctlplane.example.com:compute-2.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.292 2 DEBUG oslo_concurrency.lockutils [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:02 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:02 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:02 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:02.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1012934223' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:02 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1023247151' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:02 compute-2 nova_compute[163961]: 2025-10-09 10:10:02.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:02 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:02 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:03 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:03 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:03 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:03.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.293 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.293 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.294 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.304 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.304 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:03 compute-2 nova_compute[163961]: 2025-10-09 10:10:03.305 2 DEBUG nova.compute.manager [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct 09 10:10:03 compute-2 ceph-mon[5983]: pgmap v1114: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:03 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:03 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:03 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:04 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:04 compute-2 nova_compute[163961]: 2025-10-09 10:10:04.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:04 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:04 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:04 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:04.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:04 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:04 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:04 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:05 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:05 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:05 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:05.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:05 compute-2 ceph-mon[5983]: pgmap v1115: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:05 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1732046277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4139326846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct 09 10:10:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:05 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:05 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:06 compute-2 nova_compute[163961]: 2025-10-09 10:10:06.167 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:06 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:06 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:06 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:06.360 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:06 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:06 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:07 compute-2 nova_compute[163961]: 2025-10-09 10:10:07.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:07 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:07 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000011s ======
Oct 09 10:10:07 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:07.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000011s
Oct 09 10:10:07 compute-2 ceph-mon[5983]: pgmap v1116: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:07 compute-2 nova_compute[163961]: 2025-10-09 10:10:07.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:07 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:07 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:08 compute-2 systemd[1]: Starting system activity accounting tool...
Oct 09 10:10:08 compute-2 systemd[1]: sysstat-collect.service: Deactivated successfully.
Oct 09 10:10:08 compute-2 systemd[1]: Finished system activity accounting tool.
Oct 09 10:10:08 compute-2 podman[182033]: 2025-10-09 10:10:08.221188073 +0000 UTC m=+0.049933146 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=iscsid)
Oct 09 10:10:08 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:08 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:08 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:08.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:08 compute-2 sudo[182052]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:10:08 compute-2 sudo[182052]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:08 compute-2 sudo[182052]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:08 compute-2 sudo[182077]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --image quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec --timeout 895 ls
Oct 09 10:10:08 compute-2 sudo[182077]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:08 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:08 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:09 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:09 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:09 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.002000020s ======
Oct 09 10:10:09 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:09.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000020s
Oct 09 10:10:09 compute-2 podman[182158]: 2025-10-09 10:10:09.139579354 +0000 UTC m=+0.047025715 container exec 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:10:09 compute-2 nova_compute[163961]: 2025-10-09 10:10:09.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:09 compute-2 nova_compute[163961]: 2025-10-09 10:10:09.172 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:09 compute-2 podman[182175]: 2025-10-09 10:10:09.285961257 +0000 UTC m=+0.049960968 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct 09 10:10:09 compute-2 podman[182158]: 2025-10-09 10:10:09.290347927 +0000 UTC m=+0.197794288 container exec_died 3269fa105124b346bbade9a076525e4f1ee13d5e25e4d5f325f5826a78acd2d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-mon-compute-2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct 09 10:10:09 compute-2 ceph-mon[5983]: pgmap v1117: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:09 compute-2 podman[182237]: 2025-10-09 10:10:09.587861076 +0000 UTC m=+0.047066803 container exec 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 10:10:09 compute-2 podman[182237]: 2025-10-09 10:10:09.597060719 +0000 UTC m=+0.056266427 container exec_died 5410eaef7d1aac98147379962cc6915521ee32fe96bca636e143a51785761648 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-node-exporter-compute-2, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Oct 09 10:10:09 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:09 compute-2 podman[182337]: 2025-10-09 10:10:09.944570163 +0000 UTC m=+0.039336229 container exec 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 10:10:09 compute-2 podman[182337]: 2025-10-09 10:10:09.955030412 +0000 UTC m=+0.049796479 container exec_died 78d2089ade6f8d172e0c13a56cfd08574c65052231e004f2c8976af855ada69c (image=quay.io/ceph/haproxy:2.3, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-haproxy-rgw-default-compute-2-gkeojf)
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:09 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:09 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:10 compute-2 podman[182390]: 2025-10-09 10:10:10.122957903 +0000 UTC m=+0.040094930 container exec a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, version=2.2.4, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, build-date=2023-02-22T09:23:20, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, io.buildah.version=1.28.2, release=1793)
Oct 09 10:10:10 compute-2 podman[182390]: 2025-10-09 10:10:10.135249356 +0000 UTC m=+0.052386392 container exec_died a04fcbca9df22504c28fab57ec3b63f36c4dca33f462c0f567edf3bc0627f37b (image=quay.io/ceph/keepalived:2.2.4, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw, version=2.2.4, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, release=1793, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, build-date=2023-02-22T09:23:20)
Oct 09 10:10:10 compute-2 podman[182433]: 2025-10-09 10:10:10.268337536 +0000 UTC m=+0.043357840 container exec 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct 09 10:10:10 compute-2 podman[182433]: 2025-10-09 10:10:10.285113412 +0000 UTC m=+0.060133707 container exec_died 497c7afc8fec44ce46000a7251f8bab138912e15672ce0c2da150a022a264c99 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Oct 09 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.288 71793 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct 09 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.289 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct 09 10:10:10 compute-2 ovn_metadata_agent[71788]: 2025-10-09 10:10:10.289 71793 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct 09 10:10:10 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:10 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:10 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:10.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:10 compute-2 sudo[182077]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:10 compute-2 sudo[182487]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Oct 09 10:10:10 compute-2 sudo[182487]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:10 compute-2 sudo[182487]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:10 compute-2 sudo[182512]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/286f8bf0-da72-5823-9a4e-ac4457d9e609/cephadm.1a8853661a9c1798390b8e8d13c27688c1b1327a075745af2ee40ac466f0ac36 --timeout 895 gather-facts
Oct 09 10:10:10 compute-2 sudo[182512]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-2 ceph-mon[5983]: pgmap v1118: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:10 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:10 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:11 compute-2 sudo[182512]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:11 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:11 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:11 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:11.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #61. Immutable memtables: 0.
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.590374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 61
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611590438, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 1953, "num_deletes": 504, "total_data_size": 4267432, "memory_usage": 4338096, "flush_reason": "Manual Compaction"}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #62: started
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611599765, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 62, "file_size": 2787071, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31401, "largest_seqno": 33349, "table_properties": {"data_size": 2779283, "index_size": 4154, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2565, "raw_key_size": 18929, "raw_average_key_size": 18, "raw_value_size": 2761688, "raw_average_value_size": 2764, "num_data_blocks": 179, "num_entries": 999, "num_filter_entries": 999, "num_deletions": 504, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760004469, "oldest_key_time": 1760004469, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 9512 microseconds, and 7920 cpu microseconds.
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.599874) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #62: 2787071 bytes OK
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.599920) [db/memtable_list.cc:519] [default] Level-0 commit table #62 started
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600382) [db/memtable_list.cc:722] [default] Level-0 commit table #62: memtable #1 done
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600406) EVENT_LOG_v1 {"time_micros": 1760004611600396, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.600451) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4257661, prev total WAL file size 4257661, number of live WAL files 2.
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000058.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.601753) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323533' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [62(2721KB)], [60(15MB)]
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611601820, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [62], "files_L6": [60], "score": -1, "input_data_size": 19141855, "oldest_snapshot_seqno": -1}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #63: 6318 keys, 13636272 bytes, temperature: kUnknown
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611644324, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 63, "file_size": 13636272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13594858, "index_size": 24536, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15813, "raw_key_size": 164389, "raw_average_key_size": 26, "raw_value_size": 13481425, "raw_average_value_size": 2133, "num_data_blocks": 975, "num_entries": 6318, "num_filter_entries": 6318, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1760002514, "oldest_key_time": 0, "file_creation_time": 1760004611, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "7f5b1458-47b7-4c0b-a668-6fbde19939d2", "db_session_id": "IGXT8FL5CO7VG5U36Z5B", "orig_file_number": 63, "seqno_to_time_mapping": "N/A"}}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644581) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 13636272 bytes
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644895) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 449.4 rd, 320.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 15.6 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(11.8) write-amplify(4.9) OK, records in: 7345, records dropped: 1027 output_compression: NoCompression
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.644910) EVENT_LOG_v1 {"time_micros": 1760004611644902, "job": 36, "event": "compaction_finished", "compaction_time_micros": 42593, "compaction_time_cpu_micros": 30293, "output_level": 6, "num_output_files": 1, "total_output_size": 13636272, "num_input_records": 7345, "num_output_records": 6318, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611645452, "job": 36, "event": "table_file_deletion", "file_number": 62}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-2/store.db/000060.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: EVENT_LOG_v1 {"time_micros": 1760004611647933, "job": 36, "event": "table_file_deletion", "file_number": 60}
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.601690) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648020) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648022) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648024) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-mon[5983]: rocksdb: (Original Log Time 2025/10/09-10:10:11.648025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct 09 10:10:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:11 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:11 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: pgmap v1119: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct 09 10:10:12 compute-2 ceph-mon[5983]: from='client.? 192.168.122.10:0/298435994' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct 09 10:10:12 compute-2 nova_compute[163961]: 2025-10-09 10:10:12.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:12 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:12 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:12 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:12.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:12 compute-2 nova_compute[163961]: 2025-10-09 10:10:12.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:12 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:12 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:13 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:13 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:13 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:13.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:13 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:13 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:13 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:14 compute-2 ceph-mon[5983]: pgmap v1120: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:14 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:14 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:14 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:14.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:14 compute-2 sudo[182570]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:14 compute-2 sudo[182570]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:14 compute-2 sudo[182570]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:14 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:14 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:14 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:15 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:15 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:15 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:15.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:15 compute-2 sudo[182595]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Oct 09 10:10:15 compute-2 sudo[182595]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:15 compute-2 sudo[182595]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:15 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:15 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:16 compute-2 ceph-mon[5983]: pgmap v1121: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:16 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:16 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' 
Oct 09 10:10:16 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:16 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:16 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:16.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:16 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:16 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:17 compute-2 nova_compute[163961]: 2025-10-09 10:10:17.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:17 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:17 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:17 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:17.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:17 compute-2 nova_compute[163961]: 2025-10-09 10:10:17.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:17 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:17 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:17 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:18 compute-2 ceph-mon[5983]: pgmap v1122: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:18 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:18 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:18 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:18 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:18 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:19 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:19 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:19 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:19.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:19 compute-2 podman[182624]: 2025-10-09 10:10:19.220965888 +0000 UTC m=+0.051826114 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct 09 10:10:19 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:19 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:19 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:20 compute-2 ceph-mon[5983]: pgmap v1123: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 1 op/s
Oct 09 10:10:20 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:20 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:20 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:20 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:20 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:20 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:21 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:21 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:21 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:21.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:21 compute-2 podman[182642]: 2025-10-09 10:10:21.227565518 +0000 UTC m=+0.053750743 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct 09 10:10:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:21 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:21 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:21 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:22 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:22 compute-2 nova_compute[163961]: 2025-10-09 10:10:22.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:22 compute-2 ceph-mon[5983]: pgmap v1124: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 1 op/s
Oct 09 10:10:22 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:22 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:22 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:22.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:22 compute-2 nova_compute[163961]: 2025-10-09 10:10:22.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:22 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:22 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:23 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:23 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:23 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:23.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:23 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:23 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:24 compute-2 ceph-mon[5983]: pgmap v1125: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:24 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:24 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:24 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:24.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:24 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:24 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:24 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:25 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:25 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:25 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:25.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:25 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:25 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:26 compute-2 ceph-mon[5983]: pgmap v1126: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:26 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:26 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:26 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:26.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:26 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:26 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:27 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:27 compute-2 nova_compute[163961]: 2025-10-09 10:10:27.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:27 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:27 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:27 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:27.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:27 compute-2 nova_compute[163961]: 2025-10-09 10:10:27.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:27 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:27 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:28 compute-2 podman[182666]: 2025-10-09 10:10:28.238765442 +0000 UTC m=+0.070060629 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct 09 10:10:28 compute-2 ceph-mon[5983]: pgmap v1127: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:28 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:28 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:28 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:28.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:28 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:28 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:29 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:29 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:29 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:29.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:29 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:29 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:29 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:30 compute-2 ceph-mon[5983]: pgmap v1128: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:30 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:30 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:30 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:30.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:30 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:30 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:31 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:31 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:31 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:31.125 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:31 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:31 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:31 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:32 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:32 compute-2 nova_compute[163961]: 2025-10-09 10:10:32.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:32 compute-2 ceph-mon[5983]: pgmap v1129: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:32 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:32 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:32 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:32.398 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:32 compute-2 nova_compute[163961]: 2025-10-09 10:10:32.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:32 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:32 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:33 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:33 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:33 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:33.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:33 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:33 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:34 compute-2 ceph-mon[5983]: pgmap v1130: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:34 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:34 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:34 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:34.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:34 compute-2 sudo[182696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:34 compute-2 sudo[182696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:34 compute-2 sudo[182696]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:34 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:34 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:34 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:35 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:35 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:35 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:35.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:35 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:35 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:35 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:36 compute-2 ceph-mon[5983]: pgmap v1131: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:36 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:36 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:36 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:36.404 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:36 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:36 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:36 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:37 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:37 compute-2 nova_compute[163961]: 2025-10-09 10:10:37.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:37 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:37 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:37 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:37.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:37 compute-2 nova_compute[163961]: 2025-10-09 10:10:37.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:37 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:37 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:38 compute-2 sshd-session[182724]: Accepted publickey for zuul from 192.168.122.10 port 50972 ssh2: ECDSA SHA256:OzuW3C3iujN2/ZLriUmW6zYrqcmz+NupOtPc5vrHRGY
Oct 09 10:10:38 compute-2 systemd[1]: Created slice User Slice of UID 1000.
Oct 09 10:10:38 compute-2 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct 09 10:10:38 compute-2 systemd-logind[800]: New session 44 of user zuul.
Oct 09 10:10:38 compute-2 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct 09 10:10:38 compute-2 systemd[1]: Starting User Manager for UID 1000...
Oct 09 10:10:38 compute-2 systemd[182729]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:10:38 compute-2 ceph-mon[5983]: pgmap v1132: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:38 compute-2 podman[182730]: 2025-10-09 10:10:38.312645286 +0000 UTC m=+0.065372083 container health_status 00fc63aaa8cad719c09c01fa981256d6e46206c41e098f6e5fe85c9151e0df41 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct 09 10:10:38 compute-2 systemd[182729]: Queued start job for default target Main User Target.
Oct 09 10:10:38 compute-2 systemd[182729]: Created slice User Application Slice.
Oct 09 10:10:38 compute-2 systemd[182729]: Started Mark boot as successful after the user session has run 2 minutes.
Oct 09 10:10:38 compute-2 systemd[182729]: Started Daily Cleanup of User's Temporary Directories.
Oct 09 10:10:38 compute-2 systemd[182729]: Reached target Paths.
Oct 09 10:10:38 compute-2 systemd[182729]: Reached target Timers.
Oct 09 10:10:38 compute-2 systemd[182729]: Starting D-Bus User Message Bus Socket...
Oct 09 10:10:38 compute-2 systemd[182729]: Starting Create User's Volatile Files and Directories...
Oct 09 10:10:38 compute-2 systemd[182729]: Listening on D-Bus User Message Bus Socket.
Oct 09 10:10:38 compute-2 systemd[182729]: Reached target Sockets.
Oct 09 10:10:38 compute-2 systemd[182729]: Finished Create User's Volatile Files and Directories.
Oct 09 10:10:38 compute-2 systemd[182729]: Reached target Basic System.
Oct 09 10:10:38 compute-2 systemd[182729]: Reached target Main User Target.
Oct 09 10:10:38 compute-2 systemd[182729]: Startup finished in 122ms.
Oct 09 10:10:38 compute-2 systemd[1]: Started User Manager for UID 1000.
Oct 09 10:10:38 compute-2 systemd[1]: Started Session 44 of User zuul.
Oct 09 10:10:38 compute-2 sshd-session[182724]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Oct 09 10:10:38 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:38 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:38 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:38.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:38 compute-2 sudo[182763]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp -p container,openstack_edpm,system,storage,virt'
Oct 09 10:10:38 compute-2 sudo[182763]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Oct 09 10:10:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:38 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:38 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:39 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:39 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:39 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:39.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:39 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:39 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:39 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:40 compute-2 ceph-mon[5983]: pgmap v1133: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:40 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:40 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:40 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:40.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:40 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:40 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:41 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:41 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:41 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:41.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:41 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "status"} v 0)
Oct 09 10:10:41 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3566203389' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:41 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:41 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:41 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:42 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:42 compute-2 nova_compute[163961]: 2025-10-09 10:10:42.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.28538 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.28544 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.18693 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.18699 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.28559 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.28294 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: pgmap v1134: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3536492109' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1945846219' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3566203389' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct 09 10:10:42 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:42 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:42 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:42.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:42 compute-2 nova_compute[163961]: 2025-10-09 10:10:42.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:42 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:42 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:43 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:43 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:43 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:43.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:43 compute-2 ovs-vsctl[183043]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct 09 10:10:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:43 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:43 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:44 compute-2 ceph-mon[5983]: pgmap v1135: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct 09 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct 09 10:10:44 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:44 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:44 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:44.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:44 compute-2 virtqemud[163507]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct 09 10:10:44 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:44 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: cache status {prefix=cache status} (starting...)
Oct 09 10:10:44 compute-2 lvm[183322]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct 09 10:10:44 compute-2 lvm[183322]: VG ceph_vg0 finished
Oct 09 10:10:44 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: client ls {prefix=client ls} (starting...)
Oct 09 10:10:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:44 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:44 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:45 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:45 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:45 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:45.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: damage ls {prefix=damage ls} (starting...)
Oct 09 10:10:45 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "report"} v 0)
Oct 09 10:10:45 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1073987092' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump loads {prefix=dump loads} (starting...)
Oct 09 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct 09 10:10:45 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct 09 10:10:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:45 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:45 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct 09 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4029739969' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: pgmap v1136: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1073987092' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2093065470' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3532518201' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1403612783' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/262828082' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3048919801' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4029739969' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1462442156' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:46 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:46 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:46 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:46.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config log"} v 0)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/1160818251' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct 09 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Oct 09 10:10:46 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/895555260' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:46 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: ops {prefix=ops} (starting...)
Oct 09 10:10:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:46 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:46 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:46 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:47 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:47 compute-2 nova_compute[163961]: 2025-10-09 10:10:47.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2256829641' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:47 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:47 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:47.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4091719402' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.18735 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.18729 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28610 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28339 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28625 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28631 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.18777 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28369 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.28655 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1462442156' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1160818251' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/895555260' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2170318210' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2239107020' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2182137390' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2825927109' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2256829641' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3843149761' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: session ls {prefix=session ls} (starting...)
Oct 09 10:10:47 compute-2 nova_compute[163961]: 2025-10-09 10:10:47.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:47 compute-2 ceph-mds[13089]: mds.cephfs.compute-2.zfggbi asok_command: status {prefix=status} (starting...)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3435942827' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "features"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2847655966' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:10:47 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3743974822' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:47 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:47 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/382053438' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.18801 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.28402 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.28670 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.28432 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: pgmap v1137: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.18840 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.28706 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4091719402' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.18858 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.28462 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3843149761' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3791647089' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/536044603' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3435942827' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2847655966' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3107918271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3743974822' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1639166459' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1972161551' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1803218710' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4283214122' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/838622420' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/382053438' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4281246886' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct 09 10:10:48 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:48 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:48 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:48.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3590603568' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Oct 09 10:10:48 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2856552148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:48 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:48 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Oct 09 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3505807830' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:49 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:49 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:49.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.28733 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2199241439' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/4117270880' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3590603568' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4279356677' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4256149710' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/45453420' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2856552148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3505807830' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/717780337' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3524732949' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Oct 09 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/428440663' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Oct 09 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2743014829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:49 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Oct 09 10:10:49 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1636638593' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:49 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:49 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3428916114' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Oct 09 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2758321859' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-2 podman[184192]: 2025-10-09 10:10:50.256128681 +0000 UTC m=+0.085319597 container health_status aebe6ca56e354ee4e6606c3f5251eb7ed34f39586e3b0598e089ea531e772335 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.18933 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.28537 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.18948 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: pgmap v1138: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/489993274' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/428440663' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2270317148' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2208863461' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2743014829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='mgr.14562 192.168.122.100:0/3475692050' entity='mgr.compute-0.lwqgfy' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2519403560' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3875743926' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1636638593' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3632590972' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3428916114' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2758321859' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct 09 10:10:50 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:50 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:50 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:50.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Oct 09 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2786971522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Oct 09 10:10:50 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1824377817' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:50 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:50 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:53.679264+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:23.291169+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.2 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:23.301754+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.2 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 175)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:23.291169+0000 osd.2 (osd.2) 174 : cluster [DBG] 8.2 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:23.301754+0000 osd.2 (osd.2) 175 : cluster [DBG] 8.2 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:54.679490+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:24.296357+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:24.306574+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 1654784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 177)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:24.296357+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:24.306574+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:55.679716+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:25.320576+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.9 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:25.331086+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.9 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 1638400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 179)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:25.320576+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.9 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:25.331086+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.9 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:56.680029+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:26.342384+0000 osd.2 (osd.2) 180 : cluster [DBG] 8.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:26.352870+0000 osd.2 (osd.2) 181 : cluster [DBG] 8.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 heartbeat osd_stat(store_statfs(0x4fcabb000/0x0/0x4ffc00000, data 0xd0256/0x160000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 116, src has [1,118]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 116 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 1589248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 777465 data_alloc: 218103808 data_used: 24576
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 181)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:26.342384+0000 osd.2 (osd.2) 180 : cluster [DBG] 8.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:26.352870+0000 osd.2 (osd.2) 181 : cluster [DBG] 8.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 12.7 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:57.680211+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:27.343596+0000 osd.2 (osd.2) 182 : cluster [DBG] 12.7 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:27.354093+0000 osd.2 (osd.2) 183 : cluster [DBG] 12.7 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71802880 unmapped: 1556480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 183)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:27.343596+0000 osd.2 (osd.2) 182 : cluster [DBG] 12.7 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:27.354093+0000 osd.2 (osd.2) 183 : cluster [DBG] 12.7 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:58.680417+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:28.323272+0000 osd.2 (osd.2) 184 : cluster [DBG] 4.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:28.333794+0000 osd.2 (osd.2) 185 : cluster [DBG] 4.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 185)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:28.323272+0000 osd.2 (osd.2) 184 : cluster [DBG] 4.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:28.333794+0000 osd.2 (osd.2) 185 : cluster [DBG] 4.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.d deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:38:59.680590+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:29.303396+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.d deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:29.313994+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.d deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71811072 unmapped: 1548288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 187)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:29.303396+0000 osd.2 (osd.2) 186 : cluster [DBG] 8.d deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:29.313994+0000 osd.2 (osd.2) 187 : cluster [DBG] 8.d deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:00.680759+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:30.334456+0000 osd.2 (osd.2) 188 : cluster [DBG] 8.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:30.348559+0000 osd.2 (osd.2) 189 : cluster [DBG] 8.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 189)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:30.334456+0000 osd.2 (osd.2) 188 : cluster [DBG] 8.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:30.348559+0000 osd.2 (osd.2) 189 : cluster [DBG] 8.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:01.680884+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:31.291998+0000 osd.2 (osd.2) 190 : cluster [DBG] 8.16 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:31.302594+0000 osd.2 (osd.2) 191 : cluster [DBG] 8.16 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71819264 unmapped: 1540096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 789067 data_alloc: 218103808 data_used: 40960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 191)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:31.291998+0000 osd.2 (osd.2) 190 : cluster [DBG] 8.16 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:31.302594+0000 osd.2 (osd.2) 191 : cluster [DBG] 8.16 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:02.681054+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:32.256933+0000 osd.2 (osd.2) 192 : cluster [DBG] 11.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:32.271062+0000 osd.2 (osd.2) 193 : cluster [DBG] 11.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 120 heartbeat osd_stat(store_statfs(0x4fcab0000/0x0/0x4ffc00000, data 0xd813e/0x16c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71827456 unmapped: 1531904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.005482674s of 10.049218178s, submitted: 44
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 193)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:32.256933+0000 osd.2 (osd.2) 192 : cluster [DBG] 11.3 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:32.271062+0000 osd.2 (osd.2) 193 : cluster [DBG] 11.3 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:03.681288+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:33.222293+0000 osd.2 (osd.2) 194 : cluster [DBG] 8.6 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:33.232858+0000 osd.2 (osd.2) 195 : cluster [DBG] 8.6 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71843840 unmapped: 1515520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 195)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:33.222293+0000 osd.2 (osd.2) 194 : cluster [DBG] 8.6 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:33.232858+0000 osd.2 (osd.2) 195 : cluster [DBG] 8.6 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:04.681457+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:34.190940+0000 osd.2 (osd.2) 196 : cluster [DBG] 10.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:34.219159+0000 osd.2 (osd.2) 197 : cluster [DBG] 10.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.f scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 197)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:34.190940+0000 osd.2 (osd.2) 196 : cluster [DBG] 10.1f deep-scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:34.219159+0000 osd.2 (osd.2) 197 : cluster [DBG] 10.1f deep-scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:05.681614+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:35.187481+0000 osd.2 (osd.2) 198 : cluster [DBG] 10.f scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:35.222782+0000 osd.2 (osd.2) 199 : cluster [DBG] 10.f scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 1499136 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 199)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:35.187481+0000 osd.2 (osd.2) 198 : cluster [DBG] 10.f scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:35.222782+0000 osd.2 (osd.2) 199 : cluster [DBG] 10.f scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:06.681781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:36.153594+0000 osd.2 (osd.2) 200 : cluster [DBG] 10.4 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:36.199500+0000 osd.2 (osd.2) 201 : cluster [DBG] 10.4 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fcaa9000/0x0/0x4ffc00000, data 0xdc316/0x172000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 122 handle_osd_map epochs [123,124], i have 123, src has [1,124]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 1458176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 811691 data_alloc: 218103808 data_used: 45056
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 201)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:36.153594+0000 osd.2 (osd.2) 200 : cluster [DBG] 10.4 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:36.199500+0000 osd.2 (osd.2) 201 : cluster [DBG] 10.4 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:07.681944+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:37.192787+0000 osd.2 (osd.2) 202 : cluster [DBG] 10.1 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  will send 2025-10-09T09:39:37.228093+0000 osd.2 (osd.2) 203 : cluster [DBG] 10.1 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 1449984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client handle_log_ack log(last 203)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:37.192787+0000 osd.2 (osd.2) 202 : cluster [DBG] 10.1 scrub starts
Oct 09 10:10:51 compute-2 ceph-osd[11347]: log_client  logged 2025-10-09T09:39:37.228093+0000 osd.2 (osd.2) 203 : cluster [DBG] 10.1 scrub ok
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:08.682100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:09.682204+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 125 heartbeat osd_stat(store_statfs(0x4fcaa0000/0x0/0x4ffc00000, data 0xe23ae/0x17b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 125 handle_osd_map epochs [126,126], i have 125, src has [1,126]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 1433600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:10.682340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 1417216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 126 handle_osd_map epochs [126,127], i have 126, src has [1,127]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e(unlocked)] enter Initial
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000079 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=0 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000016 1 0.000034
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000122 1 0.000048
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000161 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 127 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:11.682520+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 1392640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820599 data_alloc: 218103808 data_used: 53248
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.767282 2 0.000052
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.767477 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.767500 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=127) [2] r=0 lpr=127 pi=[70,127)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000084 1 0.000131
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 128 pg[10.1e( empty local-lis/les=0/0 n=0 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:12.682690+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 1376256 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:13.682814+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72015872 unmapped: 1343488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 128 handle_osd_map epochs [129,129], i have 128, src has [1,129]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.973302841s of 11.008710861s, submitted: 32
Oct 09 10:10:51 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.920757 5 0.000039
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 0'0 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=70/70 les/c/f=71/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 45.663154 94 0.002040
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active 45.664701 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary 46.669357 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] exit Started 46.669400 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=97) [2] r=0 lpr=97 crt=40'1059 mlcod 0'0 active mbc={}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336996078s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 active pruub 227.930297852s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Reset 0.000316 1 0.000677
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] exit Start 0.000047 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129 pruub=10.336800575s) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 227.930297852s@ mbc={}] enter Started/Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002385 4 0.000096
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000056 1 0.000045
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 lc 40'632 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035652 1 0.000093
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 129 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.069951 1 0.000072
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.108230 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] exit Started 2.029027 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=128) [2]/[0] r=-1 lpr=128 pi=[70,128)/1 luod=0'0 crt=40'1059 mlcod 0'0 active+remapped mbc={}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 luod=0'0 crt=40'1059 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Reset 0.000197 1 0.000281
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.107543 3 0.000118
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 0.107638 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=129) [0] r=-1 lpr=129 pi=[97,129)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] exit Start 0.000128 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000555
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000027 1 0.000032
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000025 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000185 1 0.000220
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=0/0 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Oct 09 10:10:51 compute-2 ceph-osd[11347]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001188 3 0.000093
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 130 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:14.682969+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 130 heartbeat osd_stat(store_statfs(0x4fca96000/0x0/0x4ffc00000, data 0xe8562/0x184000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72040448 unmapped: 1318912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 130 handle_osd_map epochs [130,131], i have 130, src has [1,131]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015862 4 0.000066
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015980 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=97/98 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014927 2 0.000066
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016367 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=128/129 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=128/70 les/c/f=129/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001208 3 0.000092
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1e( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/70 les/c/f=131/71/0 sis=130) [2] r=0 lpr=130 pi=[70,130)/1 crt=40'1059 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=97/97 les/c/f=98/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.188305 5 0.000535
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000101 1 0.000097
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000646 1 0.000096
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.069841 2 0.000086
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 131 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:15.683135+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 131 heartbeat osd_stat(store_statfs(0x4fca90000/0x0/0x4ffc00000, data 0xec658/0x18a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 131 handle_osd_map epochs [132,132], i have 131, src has [1,132]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.466386 1 0.000109
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary/Active 0.725557 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started/Primary 1.741565 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] exit Started 1.741587 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=130) [0]/[2] async=[0] r=0 lpr=130 pi=[97,130)/1 crt=40'1059 mlcod 40'1059 active+remapped mbc={255={}}] enter Reset
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462431908s) [0] async=[0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 40'1059 active pruub 234.905334473s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Reset 0.000091 1 0.000158
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Start
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] state<Start>: transitioning to Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] exit Start 0.000006 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 132 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132 pruub=15.462383270s) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY pruub 234.905334473s@ mbc={}] enter Started/Stray
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72056832 unmapped: 1302528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7274f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:16.683290+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 132 handle_osd_map epochs [132,133], i have 132, src has [1,133]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.007853 7 0.000095
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000067 1 0.000098
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 DELETING pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.038244 2 0.000163
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.038378 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 pg_epoch: 133 pg[10.1f( v 40'1059 (0'0,40'1059] lb MIN local-lis/les=130/131 n=5 ec=53/34 lis/c=130/97 les/c/f=131/98/0 sis=132) [0] r=-1 lpr=132 pi=[97,132)/1 crt=40'1059 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.046311 0 0.000000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:17.683435+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72130560 unmapped: 1228800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:18.683578+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72146944 unmapped: 1212416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:19.683733+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72187904 unmapped: 1171456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:20.683907+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72204288 unmapped: 1155072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:21.684050+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72212480 unmapped: 1146880 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:22.684183+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72220672 unmapped: 1138688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:23.684334+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72228864 unmapped: 1130496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:24.684445+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:25.684581+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:26.684717+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:27.684899+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:28.685015+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72237056 unmapped: 1122304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:29.685168+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:30.685316+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72245248 unmapped: 1114112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:31.685429+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72253440 unmapped: 1105920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:32.685557+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6ded0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:33.685701+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd6d6f2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72261632 unmapped: 1097728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:34.685857+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72269824 unmapped: 1089536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:35.685987+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72294400 unmapped: 1064960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:36.686102+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835697 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:37.686230+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 1056768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:38.686349+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.992691040s of 25.019613266s, submitted: 36
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:39.686464+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:40.686613+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72318976 unmapped: 1040384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:41.686730+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835106 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:42.686855+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72327168 unmapped: 1032192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:43.687023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca89000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:44.687143+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72335360 unmapped: 1024000 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:45.687253+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72343552 unmapped: 1015808 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:46.687407+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72351744 unmapped: 1007616 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 835778 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:47.687519+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:48.687648+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72359936 unmapped: 999424 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:49.687802+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.494864464s of 10.498138428s, submitted: 2
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72368128 unmapped: 991232 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:50.687970+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72376320 unmapped: 983040 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:51.688091+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:52.688230+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72384512 unmapped: 974848 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:53.688345+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:54.688453+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd5d7f0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72417280 unmapped: 942080 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:55.688598+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:56.688722+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:57.689425+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:58.689551+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:39:59.689678+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72425472 unmapped: 933888 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:00.689818+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:01.689983+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72441856 unmapped: 917504 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:02.690136+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72450048 unmapped: 909312 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:03.690274+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:04.690403+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72458240 unmapped: 901120 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:05.690502+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:06.690641+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:07.690790+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72466432 unmapped: 892928 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:08.690925+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:09.691062+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72491008 unmapped: 868352 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:10.691204+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72499200 unmapped: 860160 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:11.691304+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:12.691412+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72507392 unmapped: 851968 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:13.691514+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:14.691654+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:15.691773+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:16.691883+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:17.692009+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:18.692111+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72548352 unmapped: 811008 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:19.692209+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:20.692332+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:21.692448+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:22.692544+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:23.692676+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:24.692814+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:25.692885+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:26.692998+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:27.693105+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:28.693206+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:29.693341+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:30.693456+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72613888 unmapped: 745472 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:31.693563+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:32.693666+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:33.693762+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:34.693891+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:35.693995+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:36.694096+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:37.694197+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:38.694294+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:39.694392+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:40.694508+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72671232 unmapped: 688128 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:41.694611+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:42.694720+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:43.694865+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72515584 unmapped: 843776 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:44.694996+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72523776 unmapped: 835584 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:45.695130+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72531968 unmapped: 827392 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:46.695261+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:47.695381+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72540160 unmapped: 819200 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:48.695543+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72556544 unmapped: 802816 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:49.695663+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72564736 unmapped: 794624 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:50.695830+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:51.695985+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:52.696096+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:53.696231+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72572928 unmapped: 786432 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:54.696364+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:55.696496+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72581120 unmapped: 778240 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:56.696655+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:57.696814+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:58.696985+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72589312 unmapped: 770048 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:40:59.697119+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:00.697272+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72597504 unmapped: 761856 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:01.697401+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:02.697513+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72605696 unmapped: 753664 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:03.697659+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72622080 unmapped: 737280 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:04.697813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:05.697957+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 729088 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:06.698094+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:07.698257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:08.698394+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72638464 unmapped: 720896 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:09.698509+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72646656 unmapped: 712704 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:10.698669+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 704512 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:11.698809+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837290 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:12.698890+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 696320 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:13.699029+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:14.699142+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72679424 unmapped: 679936 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 85.944450378s of 85.945846558s, submitted: 1
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:15.699256+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 671744 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:16.699407+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:17.699586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 663552 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:18.699715+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 655360 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:19.699876+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:20.700009+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 647168 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:21.700121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:22.700285+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 638976 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:23.700410+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 630784 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:24.700532+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:25.700654+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 622592 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:26.700790+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:27.700903+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 614400 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:28.701011+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72753152 unmapped: 606208 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:29.701122+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:30.701266+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:31.701389+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 589824 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:32.701495+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 581632 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:33.701600+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72785920 unmapped: 573440 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:34.701703+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:35.701807+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:36.701921+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:37.702024+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:38.702153+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 565248 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:39.702259+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:40.702387+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:41.702496+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 557056 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:42.702616+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 548864 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:43.702757+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:44.702904+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 540672 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:45.703028+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:46.703173+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 532480 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:47.703328+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:48.703443+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:49.703573+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72835072 unmapped: 524288 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:50.704432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:51.704586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72843264 unmapped: 516096 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:52.704728+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd5d7ed20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:53.704888+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 507904 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:54.705017+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:55.705149+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:56.705290+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 491520 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:57.705385+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:58.705489+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:41:59.705586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 466944 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:00.705731+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:01.705855+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72900608 unmapped: 458752 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:02.705962+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 442368 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:03.706061+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:04.706171+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 434176 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:05.706291+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:06.706403+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 425984 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:07.706509+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 417792 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:08.706622+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:09.706718+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 54.127048492s of 54.128883362s, submitted: 1
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72949760 unmapped: 409600 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:10.706831+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:11.706936+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 401408 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:12.707074+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 393216 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:13.707168+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 385024 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:14.707320+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:15.707412+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72982528 unmapped: 376832 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:16.707516+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 368640 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:17.707643+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 360448 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:18.707734+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:19.707870+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:20.707987+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 344064 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:21.708092+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 335872 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:22.708186+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 327680 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:23.708280+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:24.708372+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 319488 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:25.708470+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:26.708594+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:27.708708+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73048064 unmapped: 311296 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:28.708803+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:29.708906+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:30.709063+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 303104 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:31.709155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:32.709244+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 294912 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:33.709350+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:34.709458+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:35.709549+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73072640 unmapped: 286720 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:36.709683+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73080832 unmapped: 278528 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:37.709779+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 270336 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:38.709891+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 253952 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:39.709984+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:40.710084+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 245760 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:41.710211+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:42.710329+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73121792 unmapped: 237568 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:43.710381+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:44.710807+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 229376 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:45.710971+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:46.711102+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73138176 unmapped: 221184 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:47.711360+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:48.711458+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73146368 unmapped: 212992 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:49.711555+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 204800 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:50.711783+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:51.711884+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 188416 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:52.712058+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 180224 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:53.712193+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73187328 unmapped: 172032 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:54.712503+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:55.712637+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:56.712740+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 155648 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:57.712858+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 147456 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:58.713018+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73220096 unmapped: 139264 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:42:59.713254+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73228288 unmapped: 131072 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:00.713428+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:01.713562+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 114688 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:02.713725+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:03.713865+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73252864 unmapped: 106496 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:04.713984+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:05.714089+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 98304 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:06.714202+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:07.714359+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:08.714499+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 90112 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:09.714632+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 81920 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:10.714759+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:11.714893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 73728 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:12.715228+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 65536 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:13.715348+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 57344 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:14.715446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 49152 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:15.715561+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:16.715657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73318400 unmapped: 40960 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:17.715890+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73326592 unmapped: 32768 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:18.715990+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73334784 unmapped: 24576 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:19.716097+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 16384 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:20.716234+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:21.716334+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 8192 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:22.716456+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:23.716559+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:24.716660+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 0 heap: 73359360 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:25.716794+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:26.716945+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:27.717050+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 1032192 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:28.717168+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:29.717282+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73383936 unmapped: 1024000 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:30.717398+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:31.717501+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 1015808 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:32.717608+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 1007616 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:33.717698+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73416704 unmapped: 991232 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:34.717800+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 983040 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:35.717933+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:36.718040+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:37.718929+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 974848 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:38.719076+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 966656 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:39.719235+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 958464 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:40.719418+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:41.719537+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:42.719690+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 950272 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:43.719807+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:44.719910+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 942080 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:45.720005+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:46.720144+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 925696 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:47.720251+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 917504 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:48.720386+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:49.720517+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 901120 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:50.720644+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:51.720748+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73515008 unmapped: 892928 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:52.720856+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:53.720949+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 884736 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:54.721055+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:55.721174+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:56.721299+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 868352 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:57.721423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:58.721533+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 860160 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:43:59.721641+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:00.721772+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:01.721885+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 851968 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:02.721989+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:03.722107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:04.722222+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 843776 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:05.722310+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:06.722400+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73572352 unmapped: 835584 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:07.722504+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 827392 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:08.722604+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:09.722730+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 802816 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:10.722926+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:11.723026+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:12.723140+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 786432 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:13.723246+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:14.723356+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 778240 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:15.723457+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:16.723574+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:17.723665+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 770048 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:18.723769+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 761856 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:19.723874+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 753664 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:20.723995+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:21.724096+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:22.724193+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 745472 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:23.724298+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:24.724399+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 720896 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:25.724528+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:26.724648+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 712704 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:27.724747+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 704512 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:28.724857+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:29.724957+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 688128 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:30.725083+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:31.725187+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:32.725335+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:33.725434+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 671744 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:34.725568+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:35.725673+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:36.725784+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 663552 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:37.725918+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 655360 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:38.726070+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:39.726181+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 647168 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:40.726318+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:41.726455+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 638976 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:42.726567+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 630784 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:43.726691+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 622592 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:44.726781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:45.726875+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:46.726971+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 614400 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:47.727104+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 606208 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:48.727238+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:49.727352+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 598016 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:50.727472+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:51.727576+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 589824 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:52.727688+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:53.727795+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:54.727893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 581632 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:55.727985+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 573440 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:56.728077+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:57.728165+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 565248 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:58.728257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:44:59.728366+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:00.728471+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 548864 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:01.728585+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:02.728683+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 540672 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:03.728776+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:04.728882+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:05.728979+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:06.729073+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:07.729164+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 516096 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:08.729267+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:09.729401+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 499712 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:10.729518+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:11.729625+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:12.729724+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 491520 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:13.729822+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:14.729926+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 483328 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:15.730010+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:16.730105+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 466944 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:17.730217+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:18.730308+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:19.730410+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 458752 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:20.730529+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:21.730631+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 450560 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:22.730721+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:23.730818+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:24.730883+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 442368 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:25.730981+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:26.731079+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:27.731167+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 434176 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:28.731262+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:29.731411+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 425984 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:30.731594+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:31.731746+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 417792 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:32.731868+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:33.731983+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:34.732107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:35.732259+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 409600 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5980 writes, 26K keys, 5980 commit groups, 1.0 writes per commit group, ingest: 19.15 MB, 0.03 MB/s
                                           Interval WAL: 5980 writes, 983 syncs, 6.08 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.6e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:36.732400+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:37.732550+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 344064 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:38.732706+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:39.732859+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 335872 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:40.733039+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:41.733188+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:42.733324+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 319488 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:43.733437+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:44.733588+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 303104 heap: 74407936 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 215.829528809s of 215.830673218s, submitted: 1
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:45.733683+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75366400 unmapped: 90112 heap: 75456512 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:46.733806+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:47.733909+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75456512 unmapped: 1048576 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:48.734470+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:49.734575+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:50.734722+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:51.734813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:52.734881+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:53.734975+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:54.735109+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:55.735201+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:56.735345+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:57.735440+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:58.735597+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:45:59.735691+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:00.735798+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:01.735873+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:02.735962+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:03.736056+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:04.736155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:05.737455+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:06.737567+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:07.737677+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75464704 unmapped: 1040384 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:08.737790+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:09.737887+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75472896 unmapped: 1032192 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:10.738001+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:11.738092+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:12.738184+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75481088 unmapped: 1024000 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:13.738282+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:14.738392+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75489280 unmapped: 1015808 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:15.738494+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:16.738592+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:17.738686+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75497472 unmapped: 1007616 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:18.738784+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:19.738882+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75513856 unmapped: 991232 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:20.738996+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:21.739092+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75530240 unmapped: 974848 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:22.739187+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:23.739317+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:24.739421+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75538432 unmapped: 966656 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:25.739519+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:26.739627+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75546624 unmapped: 958464 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:27.739741+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:28.739872+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:29.740023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75554816 unmapped: 950272 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:30.740187+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:31.740290+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75563008 unmapped: 942080 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:32.740387+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75571200 unmapped: 933888 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:33.740491+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75579392 unmapped: 925696 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:34.740598+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:35.740713+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:36.740813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75587584 unmapped: 917504 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:37.740907+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:38.741003+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:39.741100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75595776 unmapped: 909312 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:40.741201+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:41.741289+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75620352 unmapped: 884736 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:42.741389+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:43.741484+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:44.741581+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75628544 unmapped: 876544 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:45.741675+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:46.741779+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:47.741891+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75636736 unmapped: 868352 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:48.741994+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:49.742087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75644928 unmapped: 860160 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:50.742201+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:51.742294+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75653120 unmapped: 851968 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:52.742388+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75661312 unmapped: 843776 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:53.742481+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:54.742590+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75669504 unmapped: 835584 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:55.742686+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:56.742783+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75677696 unmapped: 827392 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:57.742893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:58.742999+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:46:59.743093+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 819200 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:00.743206+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:01.743308+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75710464 unmapped: 794624 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:02.743414+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75718656 unmapped: 786432 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:03.743516+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:04.743687+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75726848 unmapped: 778240 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:05.743786+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:06.743899+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75735040 unmapped: 770048 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:07.744018+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:08.744115+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75743232 unmapped: 761856 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:09.744203+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:10.744308+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:11.744400+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75751424 unmapped: 753664 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:12.744505+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:13.744614+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75759616 unmapped: 745472 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:14.744717+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:15.744818+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:51 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:51 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:51.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:16.744875+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75767808 unmapped: 737280 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:17.745010+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:18.745116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75776000 unmapped: 729088 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:19.745260+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75784192 unmapped: 720896 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:20.745432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:21.745567+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75800576 unmapped: 704512 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:22.745650+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:23.745739+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75808768 unmapped: 696320 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:24.745864+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:25.745973+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:26.746071+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75816960 unmapped: 688128 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:27.746164+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:28.746271+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75825152 unmapped: 679936 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:29.746647+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:30.746774+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75833344 unmapped: 671744 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:31.746933+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:32.747027+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:33.747123+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75841536 unmapped: 663552 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:34.747282+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:35.747401+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:36.747502+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:37.747606+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:38.747715+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:39.747820+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75849728 unmapped: 655360 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:40.747985+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:41.748091+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:42.748189+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75874304 unmapped: 630784 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:43.748320+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:44.748416+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:45.748513+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:46.748615+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:47.748733+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75882496 unmapped: 622592 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:48.748854+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:49.748943+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:50.749053+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:51.749148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:52.749248+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:53.749340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:54.749441+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:55.749542+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:56.749649+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:57.749750+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:58.749868+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:47:59.749979+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75890688 unmapped: 614400 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:00.750095+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75907072 unmapped: 598016 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:01.750199+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:02.750359+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:03.750453+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:04.750555+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:05.750657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:06.750771+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:07.750865+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:08.750953+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:09.751054+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:10.751164+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:11.751255+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75915264 unmapped: 589824 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:12.751357+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75923456 unmapped: 581632 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:13.751482+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:14.751585+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:15.751688+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:16.751816+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:17.751943+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:18.752054+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:19.752148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75931648 unmapped: 573440 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:20.752262+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:21.752356+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:22.752455+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:23.752550+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:24.752647+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:25.752741+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:26.752867+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:27.753019+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75948032 unmapped: 557056 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:28.753120+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:29.753252+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:30.753372+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:31.753471+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:32.753573+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75956224 unmapped: 548864 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:33.753692+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:34.753783+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:35.753897+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:36.753993+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:37.754117+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:38.754210+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:39.754307+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75964416 unmapped: 540672 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:40.754424+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:41.754527+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:42.754625+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:43.754727+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:44.754824+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:45.754985+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:46.755158+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:47.755278+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:48.755372+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:49.755487+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75980800 unmapped: 524288 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:50.755639+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75988992 unmapped: 516096 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:51.755753+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:52.755866+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 75997184 unmapped: 507904 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:53.755983+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:54.756087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:55.756188+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:56.756340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:57.756465+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:58.756581+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:48:59.756703+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76005376 unmapped: 499712 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:00.756816+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:01.756927+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:02.757025+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:03.757121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:04.757251+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:05.757343+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:06.757443+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:07.757561+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:08.757667+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:09.757762+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:10.757885+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:11.758018+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:12.758156+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:13.758283+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:14.758398+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:15.758504+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:16.758606+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:17.758702+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76021760 unmapped: 483328 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:18.758809+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76038144 unmapped: 466944 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:19.758880+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:20.758988+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:21.759094+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:22.759191+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:23.759292+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:24.759414+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:25.759557+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:26.759662+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:27.759759+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:28.759875+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:29.759989+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:30.760104+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:31.760229+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:32.760346+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76054528 unmapped: 450560 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:33.760543+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:34.760705+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:35.760833+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:36.760969+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76062720 unmapped: 442368 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:37.761062+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:38.761210+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76070912 unmapped: 434176 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:39.761354+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:40.761475+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:41.761660+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:42.761788+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76087296 unmapped: 417792 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:43.761878+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:44.762036+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:45.762166+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:46.762261+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:47.762351+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:48.762446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:49.762537+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:50.762647+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:51.762750+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76103680 unmapped: 401408 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:52.762786+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:53.762884+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:54.762982+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:55.763079+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:56.763178+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:57.763294+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:58.763379+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76111872 unmapped: 393216 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:49:59.763480+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:00.763609+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:01.763730+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:02.763853+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:03.763948+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:04.764059+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:05.764172+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:06.764285+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76128256 unmapped: 376832 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:07.764402+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:08.764511+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:09.764614+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:10.764741+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:11.764877+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76136448 unmapped: 368640 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:12.764976+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:13.765079+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:14.765510+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:15.765608+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:16.765718+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:17.765878+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76144640 unmapped: 360448 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:18.765992+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76161024 unmapped: 344064 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:19.766130+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:20.766251+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:21.766402+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:22.766492+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:23.766581+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:24.766711+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76177408 unmapped: 327680 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:25.766875+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:26.767015+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:27.767127+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76185600 unmapped: 319488 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:28.767223+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:29.767373+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:30.767541+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:31.767657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:32.767805+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:33.767939+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:34.768100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:35.768222+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:36.768337+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:37.768465+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76193792 unmapped: 311296 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:38.768687+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:39.768810+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76210176 unmapped: 294912 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:40.768950+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:41.769079+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:42.769219+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:43.769363+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:44.769501+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:45.769617+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:46.769727+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:47.769870+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:48.769994+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:49.770118+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76226560 unmapped: 278528 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:50.770257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:51.770384+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:52.770482+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:53.770613+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:54.770742+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:55.770893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:56.771020+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:57.771157+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:58.771326+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:50:59.771463+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76234752 unmapped: 270336 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:00.771596+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:01.771729+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:02.771864+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76251136 unmapped: 253952 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:03.771998+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76259328 unmapped: 245760 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:04.772149+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:05.772272+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:06.772397+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:07.772533+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:08.772633+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76267520 unmapped: 237568 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:09.772749+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:10.772869+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:11.772962+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:12.773066+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:13.773172+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:14.773297+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76275712 unmapped: 229376 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:15.773423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:16.773529+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:17.773623+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:18.773735+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:19.773866+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76283904 unmapped: 221184 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:20.773980+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:21.774088+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:22.774207+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:23.774324+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:24.774439+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:25.774560+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:26.774685+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:27.774810+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:28.774904+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:29.775001+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:30.775116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:31.775242+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:32.775366+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:33.775495+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:34.775614+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:35.775729+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:36.775886+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:37.775997+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76300288 unmapped: 204800 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:38.776121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:39.776257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76308480 unmapped: 196608 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:40.776373+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:41.776477+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:42.776585+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:43.776739+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:44.776872+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:45.777073+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:46.777179+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:47.777273+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:48.777393+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:49.777509+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:50.777646+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:51.777769+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:52.777879+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:53.777968+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76316672 unmapped: 188416 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:54.778066+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:55.778162+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:56.778289+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:57.778418+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:58.778510+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:51:59.778615+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76324864 unmapped: 180224 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:00.778781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:01.778902+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:02.779014+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:03.779113+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:04.779239+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:05.779380+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:06.779526+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:07.779663+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:08.779789+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:09.779917+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:10.780071+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:11.780195+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:12.780366+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:13.780481+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:14.780623+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:15.780757+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:16.780890+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:17.780991+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:18.781091+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:19.781206+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76341248 unmapped: 163840 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:20.781320+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:21.781426+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:22.781541+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:23.781657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:24.781811+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:25.781981+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:26.782148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:27.782264+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76357632 unmapped: 147456 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:28.782362+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:29.782506+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:30.782657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:31.782813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:32.782918+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:33.783023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:34.783163+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:35.783318+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:36.783432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:37.783539+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:38.783669+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:39.783826+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:40.783992+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:41.784107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:42.784223+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:43.784325+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:44.784430+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:45.784579+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:46.784687+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:47.784812+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:48.784898+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:49.785000+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:50.785132+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:51.785257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:52.785379+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:53.785586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:54.785773+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:55.785929+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:56.786148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:57.786306+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:58.786454+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:52:59.786527+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:00.786712+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:01.786894+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:02.787075+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:03.787432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:04.787572+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:05.787696+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 139264 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:06.787871+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:07.788016+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:08.788202+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:09.788342+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:10.788507+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:11.788650+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:12.788809+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:13.788967+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:14.789099+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:15.789220+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:16.789347+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:17.789508+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76374016 unmapped: 131072 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:18.789661+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:19.789778+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:20.790008+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:21.790116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:22.790286+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76382208 unmapped: 122880 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:23.790432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:24.790596+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:25.790714+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:26.790906+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:27.791038+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:28.791156+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:29.791301+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:30.791435+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:31.791546+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:32.791660+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:33.791767+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:34.791941+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:35.792063+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:36.792232+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:37.792358+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:38.792505+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:39.792639+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:40.792817+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:41.792994+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:42.793110+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76398592 unmapped: 106496 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:43.793318+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:44.793561+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:45.793663+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:46.793980+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:47.794104+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:48.794208+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:49.794327+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:50.794496+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:51.794618+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:52.794805+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:53.794908+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:54.795013+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:55.795156+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:56.795248+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:57.795350+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76406784 unmapped: 98304 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:58.795490+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 838802 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:53:59.795593+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:00.795723+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 heartbeat osd_stat(store_statfs(0x4fca8a000/0x0/0x4ffc00000, data 0xf24fe/0x192000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76414976 unmapped: 90112 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:01.795876+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76423168 unmapped: 81920 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:02.796006+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 133 handle_osd_map epochs [134,134], i have 133, src has [1,134]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 497.047088623s of 497.167572021s, submitted: 220
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76439552 unmapped: 65536 heap: 76505088 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:03.796141+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 134 handle_osd_map epochs [134,135], i have 134, src has [1,135]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 929854 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:04.796355+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 136 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c6d20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76521472 unmapped: 16769024 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 136 heartbeat osd_stat(store_statfs(0x4fbe0e000/0x0/0x4ffc00000, data 0xd68845/0xe0c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:05.796508+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 136 handle_osd_map epochs [137,137], i have 136, src has [1,137]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c6f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:06.796655+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:07.796782+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:08.796947+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 942313 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:09.797124+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:10.797322+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:11.797469+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:12.797580+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 heartbeat osd_stat(store_statfs(0x4fbe08000/0x0/0x4ffc00000, data 0xd6a980/0xe11000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 137 handle_osd_map epochs [138,138], i have 138, src has [1,138]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.789459229s of 10.829751968s, submitted: 37
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:13.797775+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:14.797994+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:15.798154+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:16.798325+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:17.798471+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:18.798626+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943207 data_alloc: 218103808 data_used: 57344
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:19.798729+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:20.798913+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:21.799393+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:22.799554+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:23.799659+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:24.799809+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:25.799939+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:26.800085+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:27.800247+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:28.800352+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:29.800488+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:30.800658+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:31.800818+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:32.801004+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:33.801160+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:34.801310+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:35.801429+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:36.801582+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:37.801738+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76546048 unmapped: 16744448 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:38.801898+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943359 data_alloc: 218103808 data_used: 61440
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:39.802066+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:40.802238+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76554240 unmapped: 16736256 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:41.802368+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75c74a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd75c7860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c8c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd50c8c00 session 0x55bdd75c7a40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:42.802523+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 76537856 unmapped: 16752640 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd75c7c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75c7e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:43.802680+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd75dbe00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973607 data_alloc: 234881024 data_used: 11530240
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:44.802859+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 heartbeat osd_stat(store_statfs(0x4fbe07000/0x0/0x4ffc00000, data 0xd6c952/0xe14000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 138, src has [1,139]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 138 handle_osd_map epochs [139,139], i have 139, src has [1,139]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.399969101s of 31.402891159s, submitted: 12
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88055808 unmapped: 5234688 heap: 93290496 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:45.802999+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 139 handle_osd_map epochs [139,140], i have 139, src has [1,140]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8ddc000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c8800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd50c8800 session 0x55bdd8ddc3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8ddd2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8ddda40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8ddcd20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:46.803173+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:47.803338+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d47000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d47000 session 0x55bdd8de45a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:48.803525+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 heartbeat osd_stat(store_statfs(0x4fb53e000/0x0/0x4ffc00000, data 0x1632be0/0x16dd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd8de4780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88891392 unmapped: 10698752 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1051961 data_alloc: 234881024 data_used: 11530240
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:49.803690+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd8de4960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd8de4b40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:50.803904+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 88834048 unmapped: 10756096 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _renew_subs
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:51.804010+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:52.804121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94715904 unmapped: 4874240 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:53.804259+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:54.804424+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94732288 unmapped: 4857856 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:55.804589+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:56.804760+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:57.804893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:58.805040+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1108960 data_alloc: 234881024 data_used: 16879616
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:54:59.805219+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fb53a000/0x0/0x4ffc00000, data 0x1634bd5/0x16e1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:00.805431+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 94765056 unmapped: 4825088 heap: 99590144 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.508440018s of 16.597480774s, submitted: 91
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:01.805532+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97c9000/0x0/0x4ffc00000, data 0x21f8bd5/0x22a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 105594880 unmapped: 4489216 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:02.805665+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:03.805775+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207958 data_alloc: 234881024 data_used: 17833984
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:04.805912+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:05.806132+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:06.806286+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a4000/0x0/0x4ffc00000, data 0x222bbd5/0x22d8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:07.806384+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:08.806549+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207054 data_alloc: 234881024 data_used: 17838080
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:09.806642+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104022016 unmapped: 6062080 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:10.806809+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:11.806922+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a1000/0x0/0x4ffc00000, data 0x222ebd5/0x22db000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:12.807074+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104038400 unmapped: 6045696 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:13.807224+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207662 data_alloc: 234881024 data_used: 17899520
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:14.807342+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.325393677s of 13.400735855s, submitted: 135
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:15.807473+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104046592 unmapped: 6037504 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:16.807614+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:17.807755+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:18.807891+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f97a0000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 6029312 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1207886 data_alloc: 234881024 data_used: 17899520
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:19.808006+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd75db4a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 104652800 unmapped: 5431296 heap: 110084096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75db860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:20.808191+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd75c6d20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d63c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:21.808327+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:22.808468+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106381312 unmapped: 14336000 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df4000/0x0/0x4ffc00000, data 0x2bdac37/0x2c88000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:23.808574+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7cd20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106414080 unmapped: 14303232 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1283605 data_alloc: 234881024 data_used: 17903616
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:24.808742+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106422272 unmapped: 14295040 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7c3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:25.808869+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6d683c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.297891617s of 11.330360413s, submitted: 36
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6d68960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:26.808999+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106102784 unmapped: 14614528 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:27.809131+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 111288320 unmapped: 9428992 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:28.809231+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df3000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350314 data_alloc: 251658240 data_used: 27828224
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:29.809368+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:30.809479+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:31.809649+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115245056 unmapped: 5472256 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:32.809789+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:33.809900+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1350570 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8df2000/0x0/0x4ffc00000, data 0x2bdac5a/0x2c89000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:34.810045+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:35.810190+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115261440 unmapped: 5455872 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 6965 writes, 28K keys, 6965 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 6965 writes, 1430 syncs, 4.87 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 985 writes, 2603 keys, 985 commit groups, 1.0 writes per commit group, ingest: 2.82 MB, 0.00 MB/s
                                           Interval WAL: 985 writes, 447 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.7      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.6      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a590#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.000       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55bdd358a9b0#2 capacity: 1.11 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000122231%) FilterBlock(3,0.33 KB,2.82073e-05%) IndexBlock(3,0.34 KB,2.95505e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:36.810340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115294208 unmapped: 5423104 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:37.810495+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.703389168s of 11.709489822s, submitted: 7
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116432896 unmapped: 4284416 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:38.810642+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1406758 data_alloc: 251658240 data_used: 27897856
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:39.810799+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 4710400 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:40.810973+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:41.811101+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:42.811228+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116146176 unmapped: 4571136 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:43.811330+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8755000/0x0/0x4ffc00000, data 0x3272c5a/0x3321000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1405238 data_alloc: 251658240 data_used: 27901952
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:44.811481+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116097024 unmapped: 4620288 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:45.811619+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:46.811771+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116334592 unmapped: 4382720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:47.811904+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.993879318s of 10.153537750s, submitted: 296
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d69680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 109699072 unmapped: 11018240 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be3e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:48.812048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f873a000/0x0/0x4ffc00000, data 0x3293c5a/0x3342000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x417f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:49.812214+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:50.812397+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:51.812528+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:52.812668+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:53.812778+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f938e000/0x0/0x4ffc00000, data 0x222fbd5/0x22dc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1218148 data_alloc: 234881024 data_used: 17891328
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:54.812892+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:55.813032+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 12206080 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:56.813173+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd8de4f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101974016 unmapped: 18743296 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd7b5cf00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:57.813340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:58.813492+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:55:59.813645+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:00.813824+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:01.814033+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:02.814208+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:03.814306+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:04.814448+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:05.814557+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:06.814705+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:07.814863+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:08.814996+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1007408 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:09.815129+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:10.815240+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:11.815334+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 101998592 unmapped: 18718720 heap: 120717312 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5fc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5fc00 session 0x55bdd6decd20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd6d6fa40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6a000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:12.815429+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd5d7d680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.631416321s of 24.683015823s, submitted: 93
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd6d6f2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd4a30780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd75cf0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd4fc1c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd755fa40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:13.815528+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1095448 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:14.815680+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:15.815804+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102006784 unmapped: 29212672 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd755e5a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:16.815932+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 29204480 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:17.816066+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:18.816190+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107241472 unmapped: 23977984 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:19.816352+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107274240 unmapped: 23945216 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:20.816514+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107307008 unmapped: 23912448 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:21.816631+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:22.816781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:23.816927+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1176388 data_alloc: 234881024 data_used: 21004288
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:24.817072+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:25.817176+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ca0000/0x0/0x4ffc00000, data 0x1920bb2/0x19cc000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107339776 unmapped: 23879680 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:26.817311+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.983060837s of 14.016182899s, submitted: 42
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 117112832 unmapped: 14106624 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:27.817404+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:28.817538+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116629504 unmapped: 14589952 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265302 data_alloc: 234881024 data_used: 21835776
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:29.817708+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:30.817898+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:31.818035+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116662272 unmapped: 14557184 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:32.818169+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92d2000/0x0/0x4ffc00000, data 0x22d5bb2/0x2381000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116703232 unmapped: 14516224 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:33.818307+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:34.818446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:35.818548+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:36.818706+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e9000/0x0/0x4ffc00000, data 0x22d7bb2/0x2383000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:37.818876+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:38.819024+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260078 data_alloc: 234881024 data_used: 21839872
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:39.819155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.858042717s of 12.936762810s, submitted: 133
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:40.819292+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116727808 unmapped: 14491648 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:41.819413+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:42.819558+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:43.819727+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116736000 unmapped: 14483456 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260302 data_alloc: 234881024 data_used: 21839872
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:44.819880+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e8000/0x0/0x4ffc00000, data 0x22d8bb2/0x2384000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:45.820023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:46.820150+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116776960 unmapped: 14442496 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:47.820273+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f92e7000/0x0/0x4ffc00000, data 0x22d9bb2/0x2385000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46800 session 0x55bdd755ef00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 14434304 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:48.820385+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436b800 session 0x55bdd7b5d680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:49.820521+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:50.820705+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:51.820856+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:52.821022+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:53.821149+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:54.821307+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:55.821438+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:56.821570+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:57.821703+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:58.821858+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:56:59.822004+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:00.822161+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:01.822270+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:02.822420+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:03.822548+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1021126 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:04.822696+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:05.822826+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:06.822964+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:07.823095+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa49d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 107716608 unmapped: 23502848 heap: 131219456 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:08.823221+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.193244934s of 29.216753006s, submitted: 42
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6be32c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc3e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd6e230e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd6e62000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd4fc3c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106356736 unmapped: 28540928 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:09.823347+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9000 session 0x55bdd4a334a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436b800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:10.823482+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:11.823612+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:12.823762+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106364928 unmapped: 28532736 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:13.823896+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101220 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:14.823997+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:15.824100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:16.824231+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:17.824365+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec7000/0x0/0x4ffc00000, data 0x16f9bb2/0x17a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd4fc34a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:18.824507+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106373120 unmapped: 28524544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f5ec00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f5ec00 session 0x55bdd4fc2780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd6d46000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd6d46000 session 0x55bdd7b5c3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.317389488s of 10.355058670s, submitted: 47
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd74754a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:19.824658+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1101553 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:20.824895+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 106389504 unmapped: 28508160 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:21.825050+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:22.825213+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:23.825390+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:24.825561+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:25.825705+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:26.825867+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:27.825970+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:28.826107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:29.826207+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 110002176 unmapped: 24895488 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1167805 data_alloc: 234881024 data_used: 18460672
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ec6000/0x0/0x4ffc00000, data 0x16f9bd5/0x17a6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.569676399s of 10.576947212s, submitted: 6
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:30.826333+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116875264 unmapped: 18022400 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:31.826425+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:32.826551+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:33.826737+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f949b000/0x0/0x4ffc00000, data 0x2123bd5/0x21d0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:34.826881+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1250407 data_alloc: 234881024 data_used: 18644992
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:35.827016+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116645888 unmapped: 18251776 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:36.827155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:37.827276+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:38.827423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:39.827566+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f947a000/0x0/0x4ffc00000, data 0x2145bd5/0x21f2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245743 data_alloc: 234881024 data_used: 18653184
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:40.827688+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:41.827889+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:42.828044+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.912234306s of 12.987756729s, submitted: 133
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:43.828185+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:44.828330+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1245823 data_alloc: 234881024 data_used: 18653184
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9474000/0x0/0x4ffc00000, data 0x214bbd5/0x21f8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:45.828470+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116785152 unmapped: 18112512 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fa40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1a40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:46.828623+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac7000/0x0/0x4ffc00000, data 0x2af7c37/0x2ba5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:47.828750+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:48.828881+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116613120 unmapped: 18284544 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:49.829028+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1326222 data_alloc: 234881024 data_used: 18653184
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:50.829265+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:51.829458+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8ac4000/0x0/0x4ffc00000, data 0x2afac37/0x2ba8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:52.829608+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116621312 unmapped: 18276352 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.079881668s of 10.119346619s, submitted: 44
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd7ae1c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:53.829749+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:54.829892+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 116965376 unmapped: 17932288 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1332340 data_alloc: 234881024 data_used: 18657280
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4dc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:55.830032+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:56.830161+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:57.830297+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:58.830410+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:57:59.830548+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1399068 data_alloc: 251658240 data_used: 28499968
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:00.830682+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:01.830813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126164992 unmapped: 8732672 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8a9f000/0x0/0x4ffc00000, data 0x2b1ec5a/0x2bcd000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:02.830947+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:03.831074+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126222336 unmapped: 8675328 heap: 134897664 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.117259026s of 11.127627373s, submitted: 16
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:04.831211+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131637248 unmapped: 4317184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1484586 data_alloc: 251658240 data_used: 28954624
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:05.831376+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:06.831526+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:07.831712+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130613248 unmapped: 5341184 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f87000/0x0/0x4ffc00000, data 0x3636c5a/0x36e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f87000/0x0/0x4ffc00000, data 0x3636c5a/0x36e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:08.831883+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:09.832034+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130678784 unmapped: 5275648 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1490334 data_alloc: 251658240 data_used: 29265920
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:10.832196+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f7f66000/0x0/0x4ffc00000, data 0x3657c5a/0x3706000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:11.832346+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130834432 unmapped: 5120000 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd75b50e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d800 session 0x55bdd6d71680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7fc20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:12.832486+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:13.832670+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:14.832771+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124207104 unmapped: 11747328 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1260722 data_alloc: 234881024 data_used: 18653184
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.852729797s of 10.956905365s, submitted: 182
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd5d7ed20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946f000/0x0/0x4ffc00000, data 0x214ebd5/0x21fb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd75b5c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:15.832922+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:16.833071+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:17.833187+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:18.833327+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:19.833441+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:20.833607+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:21.833747+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:22.833887+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:23.833982+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:24.834105+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:25.834226+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:26.834318+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:27.834474+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:28.834598+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:29.834746+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1048656 data_alloc: 218103808 data_used: 8847360
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:30.834971+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:31.835133+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa849000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:32.835251+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 115597312 unmapped: 20357120 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d400 session 0x55bdd6d6ed20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4dc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4dc00 session 0x55bdd6d62000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6b883c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd5d7c780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:33.835354+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6e625a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118398976 unmapped: 17555456 heap: 135954432 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd73f2000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd7ae0f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.988206863s of 19.015766144s, submitted: 47
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6d6b2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd6e225a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75ced20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd73f2000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd73f2000 session 0x55bdd75cef00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd6d6af00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:34.835491+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:35.835637+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0de000/0x0/0x4ffc00000, data 0x14e2b60/0x158e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:36.835768+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:37.835886+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:38.835987+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:39.836093+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118915072 unmapped: 24526848 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1114877 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4cc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4cc00 session 0x55bdd75cf680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:40.836331+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 118931456 unmapped: 24510464 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:41.836427+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 119881728 unmapped: 23560192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:42.836542+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:43.836640+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:44.836749+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:45.836861+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:46.836963+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:47.837087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120520704 unmapped: 22921216 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:48.837232+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0dd000/0x0/0x4ffc00000, data 0x14e2b83/0x158f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:49.837358+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1170027 data_alloc: 234881024 data_used: 19238912
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:50.837501+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120528896 unmapped: 22913024 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:51.837664+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.775194168s of 17.811866760s, submitted: 39
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120553472 unmapped: 22888448 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:52.837884+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:53.838075+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:54.838264+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:55.838443+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:56.838577+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:57.838763+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126902272 unmapped: 16539648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:58.838925+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:58:59.839069+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126910464 unmapped: 16531456 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:00.839227+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:01.839323+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:02.839504+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:03.839639+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:04.839821+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267201 data_alloc: 234881024 data_used: 19771392
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:05.840003+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126918656 unmapped: 16523264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:06.840116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:07.840543+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126926848 unmapped: 16515072 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:08.840667+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:09.840803+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267353 data_alloc: 234881024 data_used: 19775488
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:10.841008+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:11.841162+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126935040 unmapped: 16506880 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72741e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6af00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75c6960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75b41e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba7c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.414363861s of 20.486953735s, submitted: 115
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba7c00 session 0x55bdd4a30780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9579000/0x0/0x4ffc00000, data 0x2038b83/0x20e5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd75b5c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:12.841284+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:13.841404+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:14.841952+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1316636 data_alloc: 234881024 data_used: 19775488
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:15.842087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7220960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:16.842238+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4fc21e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125001728 unmapped: 18440192 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd7b5cd20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c400 session 0x55bdd7b5d0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:17.842341+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125018112 unmapped: 18423808 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:18.842440+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127369216 unmapped: 16072704 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:19.842585+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1363761 data_alloc: 234881024 data_used: 26726400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:20.842736+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:21.842919+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:22.843100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.219516754s of 11.251511574s, submitted: 40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:23.843198+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd2000/0x0/0x4ffc00000, data 0x27ecbe5/0x289a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:24.843316+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1364569 data_alloc: 234881024 data_used: 26726400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:25.843417+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129179648 unmapped: 14262272 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8dd0000/0x0/0x4ffc00000, data 0x27edbe5/0x289b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:26.843551+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129196032 unmapped: 14245888 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:27.843653+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 129523712 unmapped: 13918208 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:28.843811+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134766592 unmapped: 8675328 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:29.843931+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136036352 unmapped: 7405568 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:30.844062+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:31.844112+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136077312 unmapped: 7364608 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:32.844246+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:33.844376+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:34.844463+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1458543 data_alloc: 251658240 data_used: 27844608
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.565647125s of 11.631405830s, submitted: 106
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:35.844587+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136085504 unmapped: 7356416 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:36.844661+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136118272 unmapped: 7323648 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:37.844722+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8426000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136159232 unmapped: 7282688 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:38.844896+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:39.845033+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:40.845175+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:41.845311+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:42.845403+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:43.845554+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:44.845711+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:45.845865+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136388608 unmapped: 7053312 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:46.845959+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:47.846084+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:48.846183+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:49.846305+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:50.846408+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:51.846518+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:52.846620+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:53.846769+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136396800 unmapped: 7045120 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:54.846864+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451543 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:55.846976+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:56.847085+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:57.847223+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:58.847357+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T09:59:59.847484+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.583915710s of 24.589715958s, submitted: 15
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8435000/0x0/0x4ffc00000, data 0x3189be5/0x3237000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:00.847591+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:01.847696+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136404992 unmapped: 7036928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:02.847804+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:03.847973+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:04.848107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:05.848208+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136413184 unmapped: 7028736 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:06.848340+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:07.848486+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136421376 unmapped: 7020544 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:08.848582+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136429568 unmapped: 7012352 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:09.848681+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136462336 unmapped: 6979584 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1451847 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:10.848796+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:11.848896+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:12.849002+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:13.849116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:14.849257+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8433000/0x0/0x4ffc00000, data 0x318abe5/0x3238000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.233993530s of 15.235481262s, submitted: 1
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1449663 data_alloc: 251658240 data_used: 27832320
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:15.849371+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 6971392 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd5d7d2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd5d7cf00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:16.849472+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7b5d2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131284992 unmapped: 12156928 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:17.849571+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:18.849683+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:19.849813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1269843 data_alloc: 234881024 data_used: 19759104
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:20.850003+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:21.850100+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f946d000/0x0/0x4ffc00000, data 0x2039b83/0x20e6000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131014656 unmapped: 12427264 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cfe00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269000 session 0x55bdd7275c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd7ae10e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:22.850198+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:23.850305+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:24.850401+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:25.850511+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:26.850641+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:27.850772+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:28.850873+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:29.850995+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:30.851140+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:31.851270+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:32.851351+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:33.851468+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:34.851634+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1074339 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:35.851774+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:36.851913+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:37.852048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125632512 unmapped: 17809408 heap: 143441920 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:38.852179+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd72745a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd7475e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6f7d2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7ba6400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7ba6400 session 0x55bdd75ce000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 23.875421524s of 23.928354263s, submitted: 95
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd75cf860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6dec000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125640704 unmapped: 28368896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd755e000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd755e5a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:39.852311+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1135665 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:40.852456+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c000 session 0x55bdd755e780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd436bc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd436bc00 session 0x55bdd6d6f2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:41.852586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125648896 unmapped: 28360704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268800 session 0x55bdd6d6fc20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7268c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7268c00 session 0x55bdd6d6ef00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e3000/0x0/0x4ffc00000, data 0x14ddbb2/0x1589000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:42.852728+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125698048 unmapped: 28311552 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:43.852883+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:44.852992+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1191655 data_alloc: 234881024 data_used: 19226624
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:45.853113+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa0e2000/0x0/0x4ffc00000, data 0x14ddbd5/0x158a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:46.853245+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 125739008 unmapped: 28270592 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d6f0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6f7c780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:47.853348+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:48.853497+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:49.853638+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:50.853799+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:51.853907+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:52.854034+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:53.854166+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:54.854304+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1083131 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:55.854465+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:56.854594+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:57.854735+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 123256832 unmapped: 30752768 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4e094a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd7221860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75da780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.361354828s of 19.447809219s, submitted: 108
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cf680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd7ae1860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6be23c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be2b40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6be3e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:58.854888+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84d000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:00:59.855023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1116223 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:00.855184+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6dec000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6ded0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:01.855331+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6dec5a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122052608 unmapped: 31956992 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd4a30780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:02.855425+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:03.855518+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:04.855665+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa45f000/0x0/0x4ffc00000, data 0x1160b83/0x120d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6d6a3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7275e00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122134528 unmapped: 31875072 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146352 data_alloc: 234881024 data_used: 15314944
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd4a33a40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:05.855811+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:06.855973+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:07.856077+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:08.856253+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:09.856423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:10.856604+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:11.856770+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:12.856940+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:13.857079+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:14.857201+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:15.857334+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:16.857441+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:17.857595+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:18.857718+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:19.857835+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1088438 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:20.858059+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa84e000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:21.858190+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 120848384 unmapped: 33161216 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:22.858337+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 24.317052841s of 24.347005844s, submitted: 33
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6be2b40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6dec000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:23.858496+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:24.858636+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136711 data_alloc: 234881024 data_used: 11534336
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:25.858773+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7ae1860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:26.858886+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:27.859021+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 121937920 unmapped: 32071680 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:28.859151+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:29.859282+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:30.859438+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:31.859547+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:32.859699+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:33.859859+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:34.859964+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122331136 unmapped: 31678464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1169087 data_alloc: 234881024 data_used: 15511552
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:35.860083+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4fa315000/0x0/0x4ffc00000, data 0x12abbb2/0x1357000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:36.860216+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 122339328 unmapped: 31670272 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.554158211s of 14.583094597s, submitted: 35
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:37.860308+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f90f1000/0x0/0x4ffc00000, data 0x20bfbb2/0x216b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132595712 unmapped: 21413888 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:38.860425+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:39.860546+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130760704 unmapped: 23248896 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1279137 data_alloc: 234881024 data_used: 15622144
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:40.860712+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:41.860818+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:42.860916+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9062000/0x0/0x4ffc00000, data 0x214ebb2/0x21fa000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:43.861049+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:44.861170+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130768896 unmapped: 23240704 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1278745 data_alloc: 234881024 data_used: 15638528
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:45.861290+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9041000/0x0/0x4ffc00000, data 0x216fbb2/0x221b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:46.861428+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:47.861528+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.235909462s of 11.308976173s, submitted: 148
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:48.861646+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6b2c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 130777088 unmapped: 23232512 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75b4b40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:49.861754+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:50.861891+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:51.861997+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:52.862118+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:53.862370+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:54.862510+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:55.862663+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126418944 unmapped: 27590656 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:56.862795+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:57.862951+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:58.863107+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:01:59.863262+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:00.863400+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:01.863526+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:02.863642+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9cef000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:03.863777+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:04.863893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 126427136 unmapped: 27582464 heap: 154009600 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1097307 data_alloc: 234881024 data_used: 9961472
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6d743c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4c800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4c800 session 0x55bdd6e221e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6d6e780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:05.864039+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd6d6eb40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.061567307s of 17.076330185s, submitted: 25
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4a301e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd74752c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0d400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0d400 session 0x55bdd4a305a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd75b5c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd75cfc20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:06.864181+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:07.864296+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:08.864429+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124633088 unmapped: 33579008 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd6decb40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:09.864558+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad3000/0x0/0x4ffc00000, data 0x16ddb60/0x1789000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd7b5c780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2000 session 0x55bdd58a8f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124624896 unmapped: 33587200 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1172702 data_alloc: 234881024 data_used: 9961472
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd6e23a40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:10.864749+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd50c9c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 124657664 unmapped: 33554432 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:11.864874+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:12.865018+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:13.865152+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:14.865297+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234717 data_alloc: 234881024 data_used: 18628608
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:15.865442+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:16.865571+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127950848 unmapped: 30261248 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:17.865713+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd4a330e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd75ce3c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 127959040 unmapped: 30253056 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd5d7f860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2c00 session 0x55bdd72214a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd4f0c000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.769443512s of 12.800458908s, submitted: 35
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd4f0c000 session 0x55bdd7ae0f00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd8d4d000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd8d4d000 session 0x55bdd6d6b860
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2400 session 0x55bdd7474b40
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:18.865812+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd74750e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3000 session 0x55bdd4fc23c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f9ad2000/0x0/0x4ffc00000, data 0x16ddb70/0x178a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:19.865934+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 128663552 unmapped: 29548544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1319022 data_alloc: 234881024 data_used: 18628608
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:20.866109+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133038080 unmapped: 25174016 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:21.866245+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131661824 unmapped: 26550272 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e22d20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87c8000/0x0/0x4ffc00000, data 0x29e5b80/0x2a93000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:22.866375+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 131686400 unmapped: 26525696 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:23.866502+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:24.866650+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:25.866789+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:26.866903+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:27.867070+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:28.867193+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:29.867322+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1467181 data_alloc: 251658240 data_used: 29065216
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:30.867464+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:31.867574+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f87a3000/0x0/0x4ffc00000, data 0x2a09ba3/0x2ab8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 140550144 unmapped: 17661952 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:32.867721+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.509668350s of 14.591269493s, submitted: 121
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147144704 unmapped: 11067392 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:33.867872+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:34.868039+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566109 data_alloc: 251658240 data_used: 29638656
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:35.868178+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:36.868299+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:37.868435+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:38.868556+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:39.868686+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147701760 unmapped: 10510336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:40.868871+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:41.869014+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147734528 unmapped: 10477568 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:42.869161+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:43.869306+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:44.869449+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 1566125 data_alloc: 251658240 data_used: 29638656
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:45.869626+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:46.869758+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6a2a000/0x0/0x4ffc00000, data 0x35e3ba3/0x3692000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:47.869893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 147767296 unmapped: 10444800 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:48.870041+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.815006256s of 15.883452415s, submitted: 119
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d75c20
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7274000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd4e09680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:49.870160+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1323675 data_alloc: 234881024 data_used: 18726912
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:50.870313+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:51.870462+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 141713408 unmapped: 16498688 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:52.870619+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd50c9c00 session 0x55bdd57dfe00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd57df680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd58a9680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8031000/0x0/0x4ffc00000, data 0x1fddb70/0x208a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:53.870901+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:54.871056+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:55.871206+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:56.871361+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:57.871522+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:58.871711+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:02:59.871861+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:00.872056+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:01.872162+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:02.872325+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:03.872428+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:04.872551+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:05.872662+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:06.872798+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:07.872924+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:08.873061+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:09.873229+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f8f94000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132210688 unmapped: 26001408 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1128948 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:10.873422+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.839834213s of 21.894390106s, submitted: 91
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd6d70960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4a30780
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd6f7c5a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3c00 session 0x55bdd7ae12c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd75b52c0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:11.873553+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:12.873726+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd6e230e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3400 session 0x55bdd7b5c960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132218880 unmapped: 25993216 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:13.873873+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd7b5d680
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c3800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c3800 session 0x55bdd4e085a0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd7269c00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd94c2800
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 132161536 unmapped: 26050560 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:14.874065+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133365760 unmapped: 24846336 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1264737 data_alloc: 234881024 data_used: 18120704
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:15.874214+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:16.874366+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:17.874519+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:18.874670+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:19.874816+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1275377 data_alloc: 234881024 data_used: 19714048
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:20.874980+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:21.875093+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f88fd000/0x0/0x4ffc00000, data 0x1713b73/0x17bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x5b3f9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:22.875253+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 133758976 unmapped: 24453120 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:23.875423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.853632927s of 12.886682510s, submitted: 37
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142090240 unmapped: 16121856 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:24.875575+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6fd9000/0x0/0x4ffc00000, data 0x1e97b73/0x1f43000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346571 data_alloc: 234881024 data_used: 20361216
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:25.875717+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:26.875874+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:27.876048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:28.876194+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:29.876352+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345475 data_alloc: 234881024 data_used: 20361216
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:30.876530+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:31.876660+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:32.876821+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:33.877011+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f6f16000/0x0/0x4ffc00000, data 0x1f5ab73/0x2006000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 142680064 unmapped: 15532032 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:34.877147+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.167773247s of 11.241725922s, submitted: 103
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd7269c00 session 0x55bdd4e08960
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd94c2800 session 0x55bdd755e000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: handle_auth_request added challenge on 0x55bdd745cc00
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 ms_handle_reset con 0x55bdd745cc00 session 0x55bdd5d7f0e0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:35.877260+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:36.877376+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:37.877532+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:38.877785+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:39.877969+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:40.878150+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:41.878263+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:42.878407+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:43.878565+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:44.878677+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:45.878820+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:46.878930+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:47.879086+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:48.879266+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:49.879412+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:50.879586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:51.879689+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:52.879888+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:53.880055+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:54.880177+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:55.880336+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:56.880468+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:57.880627+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:58.880774+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:03:59.880928+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:00.881135+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134791168 unmapped: 23420928 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:01.881251+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:02.881366+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:03.881546+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:04.881694+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:05.881813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134799360 unmapped: 23412736 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:06.881898+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:07.882021+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:08.882187+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:09.882319+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134807552 unmapped: 23404544 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:10.882485+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:11.882582+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:12.882693+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:13.882796+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134815744 unmapped: 23396352 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:14.882937+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:15.883106+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:16.883211+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:17.883321+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:18.883446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:19.883554+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:20.883743+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134823936 unmapped: 23388160 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:21.883897+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:22.884048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:23.884186+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:24.884359+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:25.884485+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134832128 unmapped: 23379968 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:26.884584+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:27.884685+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:28.884793+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:29.884882+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:30.885005+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:31.885105+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:32.885209+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134848512 unmapped: 23363584 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:33.885330+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134610944 unmapped: 23601152 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:34.885432+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134594560 unmapped: 23617536 heap: 158212096 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:35.885544+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 145776640 unmapped: 23478272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf dump' '{prefix=perf dump}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf schema' '{prefix=perf schema}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:36.885637+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:37.885742+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:38.885867+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:39.885987+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:40.886116+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:41.886217+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:42.886319+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 34332672 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:43.886420+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134922240 unmapped: 34332672 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:44.886525+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:45.886635+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 34365440 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:46.886738+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 34365440 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:47.886864+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 34365440 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:48.886980+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 34365440 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:49.887094+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134889472 unmapped: 34365440 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:50.887224+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:51.887321+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:52.887428+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:53.887538+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:54.887644+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:55.887756+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:56.888380+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:57.888489+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134905856 unmapped: 34349056 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:58.888589+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:04:59.888744+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:00.888919+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:01.889069+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:02.889193+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:03.889362+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:04.889475+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:05.889595+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134914048 unmapped: 34340864 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:06.889728+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:07.889870+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:08.889999+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:09.890137+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:10.890306+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:11.890411+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:12.890540+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:13.890657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134930432 unmapped: 34324480 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:14.890782+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134938624 unmapped: 34316288 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:15.890893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:16.891023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:17.891155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:18.891295+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:19.891426+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:20.891586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:21.891700+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134946816 unmapped: 34308096 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:22.891810+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:23.891944+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:24.892081+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:25.892211+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:26.892359+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:27.892463+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:28.892598+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:29.892705+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134955008 unmapped: 34299904 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:30.892863+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 34291712 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:31.892976+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 34291712 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:32.893121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 34291712 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:33.893254+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134963200 unmapped: 34291712 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:34.893400+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 34283520 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:35.893512+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 34283520 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1800.0 total, 600.0 interval
                                           Cumulative writes: 11K writes, 43K keys, 11K commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s
                                           Cumulative WAL: 11K writes, 3279 syncs, 3.42 writes per sync, written: 0.04 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 4248 writes, 15K keys, 4248 commit groups, 1.0 writes per commit group, ingest: 18.47 MB, 0.03 MB/s
                                           Interval WAL: 4248 writes, 1849 syncs, 2.30 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:36.893638+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 34283520 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:37.893779+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134971392 unmapped: 34283520 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:38.893887+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:39.894008+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:40.894198+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141455 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:41.894351+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:42.894475+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:43.894646+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:44.894814+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 134987776 unmapped: 34267136 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80fe000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 130.458419800s of 130.484359741s, submitted: 52
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:45.894940+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 135012352 unmapped: 34242560 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:46.895087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:47.895477+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:48.895620+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:49.895813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:50.896090+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:51.896254+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:52.896388+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:53.896528+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:54.896687+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:55.896835+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136282112 unmapped: 32972800 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:56.897028+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:57.897183+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:58.897320+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:05:59.897449+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:00.897604+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:01.898025+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:02.898165+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:03.898325+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:04.898498+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:05.898680+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:06.898810+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:07.898930+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:08.899062+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136290304 unmapped: 32964608 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:09.899201+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 32956416 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:10.899373+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 32956416 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:11.899543+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136298496 unmapped: 32956416 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:12.899683+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:13.899823+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:14.899928+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:15.900093+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:16.900234+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:17.900362+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136306688 unmapped: 32948224 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:18.900492+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 32940032 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:19.900627+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136314880 unmapped: 32940032 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: mgrc ms_handle_reset ms_handle_reset con 0x55bdd52e6000
Oct 09 10:10:51 compute-2 ceph-osd[11347]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3631142817
Oct 09 10:10:51 compute-2 ceph-osd[11347]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3631142817,v1:192.168.122.100:6801/3631142817]
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: get_auth_request con 0x55bdd50c9c00 auth_method 0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: mgrc handle_mgr_configure stats_period=5
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:20.900763+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:21.900888+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:22.901053+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:23.901204+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:24.901348+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:25.901821+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:26.902185+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:27.902345+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:28.902475+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:29.902610+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:30.902828+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:31.903049+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:32.903227+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136470528 unmapped: 32784384 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:33.903353+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 32776192 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:34.903520+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136478720 unmapped: 32776192 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:35.903768+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:36.903940+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:37.904126+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:38.904306+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:39.904475+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:40.904641+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136486912 unmapped: 32768000 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:41.904808+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:42.904958+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:43.905060+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:44.905200+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:45.905329+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:46.905481+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:47.905637+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:48.905782+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:49.905942+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:50.906119+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:51.906286+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:52.906453+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:53.906637+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:54.906803+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:55.906937+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:56.907056+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136495104 unmapped: 32759808 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:57.907161+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:58.907337+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:06:59.907446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:00.907647+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:01.907820+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:02.907974+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:03.908099+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:04.908217+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:05.908349+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:06.908473+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:07.908618+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:08.908729+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:09.908892+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136503296 unmapped: 32751616 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:10.909098+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:11.909276+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:12.909444+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:13.909621+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:14.909761+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:15.909923+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:16.910056+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:17.910175+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:18.910300+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:19.910409+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:20.910604+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136511488 unmapped: 32743424 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:21.910773+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 32735232 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:22.910893+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 32735232 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:23.911003+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136519680 unmapped: 32735232 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:24.911148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:25.911312+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:26.911487+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:27.911622+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:28.911785+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:29.911922+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:30.912138+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:31.912304+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:32.912472+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:33.912607+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:34.912781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:35.912892+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:36.913021+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:37.913172+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:38.913336+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:39.913457+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:40.913646+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:41.913792+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:42.913922+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:43.914087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:44.914285+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136527872 unmapped: 32727040 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:45.914455+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:46.914612+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:47.914757+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:48.914909+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:49.915055+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:50.915227+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:51.915364+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:52.915512+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:53.915657+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:54.915780+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:55.915935+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:56.916044+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:57.916155+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:58.916269+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:07:59.916411+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:00.916596+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:01.916743+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:02.916888+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:03.917023+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:04.917149+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:05.917297+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:06.917429+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:07.917564+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:08.917688+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136536064 unmapped: 32718848 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:09.917812+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:10.917978+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:11.918882+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:12.919832+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:13.919994+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:14.920151+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:15.920330+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:16.920424+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:17.920606+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:18.920757+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:19.920892+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:20.921077+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:21.921217+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:22.921356+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:23.921500+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:24.921656+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:25.921810+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:26.921935+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:27.922054+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:28.922173+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:29.922895+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:30.923049+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:31.923205+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:32.923333+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136552448 unmapped: 32702464 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:33.923482+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:34.923601+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:35.923759+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:36.923928+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:37.924060+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:38.924232+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:39.924412+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:40.924624+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:41.924789+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:42.924934+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136560640 unmapped: 32694272 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:43.925111+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:44.925258+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:45.925380+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:46.925534+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:47.925699+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:48.925878+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:49.926048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:50.926250+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:51.926423+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:52.926573+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:53.926761+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:54.926944+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:55.927092+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:56.927288+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:57.927446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:58.927609+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:08:59.927785+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:00.928001+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:01.928121+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:02.928295+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136568832 unmapped: 32686080 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:03.928466+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:04.928627+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:05.928762+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:06.928945+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:07.929093+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:08.929264+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:09.929452+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:10.929663+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:11.929871+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:12.930048+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:13.930198+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:14.930362+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:15.930496+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:16.930677+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:17.930870+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:18.931044+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:19.931199+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:20.931446+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:21.931582+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:22.931772+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:23.931927+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:24.932126+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:25.932279+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:26.932428+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:27.932586+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:28.932764+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136577024 unmapped: 32677888 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:29.932934+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:30.933177+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:31.933320+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:32.933458+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:33.933630+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:34.933773+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:35.933952+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:36.934122+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:37.934972+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:38.935096+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:39.935210+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:40.935430+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:41.935646+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:42.935813+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:43.935984+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:44.936148+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:45.936335+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:46.936464+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:47.936644+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136585216 unmapped: 32669696 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:48.936776+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:49.936959+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:50.937138+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:51.937285+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:52.937450+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:53.937617+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:54.937794+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:55.937962+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:56.938130+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:57.938273+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:58.938449+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:09:59.938600+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:00.938772+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:01.938898+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:02.939069+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136593408 unmapped: 32661504 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:03.939229+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:04.939349+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:05.939478+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:06.939647+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:07.939781+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:08.939932+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:09.940041+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:10.940235+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:11.940371+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:12.940522+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:13.940651+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:14.940754+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:15.940878+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct 09 10:10:51 compute-2 ceph-osd[11347]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141163 data_alloc: 234881024 data_used: 9830400
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:16.940975+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:17.941087+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136601600 unmapped: 32653312 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:18.941202+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136544256 unmapped: 32710656 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}'
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct 09 10:10:51 compute-2 ceph-osd[11347]: osd.2 141 heartbeat osd_stat(store_statfs(0x4f80ff000/0x0/0x4ffc00000, data 0xd72b50/0xe1d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x6cdf9c5), peers [0,1] op hist [])
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:19.941319+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136437760 unmapped: 32817152 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: tick
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_tickets
Oct 09 10:10:51 compute-2 ceph-osd[11347]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2025-10-09T10:10:20.941469+0000)
Oct 09 10:10:51 compute-2 ceph-osd[11347]: prioritycache tune_memory target: 4294967296 mapped: 136372224 unmapped: 32882688 heap: 169254912 old mem: 2845415832 new mem: 2845415832
Oct 09 10:10:51 compute-2 ceph-osd[11347]: do_command 'log dump' '{prefix=log dump}'
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.19014 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.28871 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.28627 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.19038 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.28895 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.28901 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.19062 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.28928 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/319661387' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/563879597' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2786971522' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1031980098' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/852438262' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1824377817' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3301407202' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3495633622' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:51 compute-2 rsyslogd[1245]: imjournal from <compute-2:ceph-osd>: begin to drop messages due to rate-limiting
Oct 09 10:10:51 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Oct 09 10:10:51 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/544842412' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:51 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:51 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:51 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:52 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:52 compute-2 nova_compute[163961]: 2025-10-09 10:10:52.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:52 compute-2 podman[184640]: 2025-10-09 10:10:52.275460735 +0000 UTC m=+0.101047870 container health_status 22e8452015e2c0cb0c66d7f3c4314eb9135608d35c4241230d0c8e19e380a1bc (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.28666 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.19095 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.28949 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.28952 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: pgmap v1139: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.19122 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.19128 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.28732 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2806679365' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1361745880' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1746788196' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/544842412' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct 09 10:10:52 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:52 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:52 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:52.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:52 compute-2 nova_compute[163961]: 2025-10-09 10:10:52.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:52 compute-2 crontab[184753]: (root) LIST (root)
Oct 09 10:10:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Oct 09 10:10:52 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2369584623' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:52 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:52 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:52 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 09 10:10:52 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2283778781' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Oct 09 10:10:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/26126343' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:53 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:53 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:53.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.19152 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.28744 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.28994 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.19179 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.29015 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.19188 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.19200 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.28789 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.28798 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/530546311' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3519033552' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1702982534' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2369584623' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2677920078' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2283778781' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/26126343' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2816554851' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2768663498' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Oct 09 10:10:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3179625250' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Oct 09 10:10:53 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/651014448' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:53 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:53 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1542477114' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1922145410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3127340658' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.29066 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.19230 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.28828 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.29090 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.19257 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.28843 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: pgmap v1140: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.28858 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.19284 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.28879 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3179625250' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3631243834' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2309428589' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2628658671' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/651014448' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3683853823' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3789920477' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1463422601' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/4174094474' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2091756922' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1542477114' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1922145410' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3824702024' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3185679173' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3029160739' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2029986551' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3127340658' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct 09 10:10:54 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:54 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:54 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:54.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/423818157' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 sudo[185067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/true
Oct 09 10:10:54 compute-2 sudo[185067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Oct 09 10:10:54 compute-2 sudo[185067]: pam_unix(sudo:session): session closed for user root
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3952302202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Oct 09 10:10:54 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1319811705' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:54 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:54 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Oct 09 10:10:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818391916' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:55 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:55 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:55.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1215003925' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Oct 09 10:10:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2133360876' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/423818157' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/487013521' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/945918695' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2211374146' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1111936001' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3952302202' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1319811705' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2098841684' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3615703962' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1935202965' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1069066591' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2818391916' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1215003925' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1671176700' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2133360876' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/114702688' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/955270682' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3828484366' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 systemd[1]: Starting Hostname Service...
Oct 09 10:10:55 compute-2 systemd[1]: Started Hostname Service.
Oct 09 10:10:55 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Oct 09 10:10:55 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/292489141' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:55 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:55 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:56 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Oct 09 10:10:56 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/2320038838' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:56 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:56 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:10:56 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:56.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:10:56 compute-2 ceph-mon[5983]: pgmap v1141: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.29261 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3828484366' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/754805668' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2583758200' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/292489141' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1279277176' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2320038838' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:56 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:56 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:56 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-nfs-cephfs-1-0-compute-2-cpioam[44409]: 09/10/2025 10:10:57 : epoch 68e78356 : compute-2 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Oct 09 10:10:57 compute-2 nova_compute[163961]: 2025-10-09 10:10:57.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "versions"} v 0)
Oct 09 10:10:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1326728277' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:57 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:57 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:57.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29288 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29050 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29062 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29327 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29324 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.19476 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29345 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.29351 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.19500 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/3089908545' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/163346850' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1002807148' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/467309719' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1326728277' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3778750636' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2449000458' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:57 compute-2 nova_compute[163961]: 2025-10-09 10:10:57.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct 09 10:10:57 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Oct 09 10:10:57 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/1307369467' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:57 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:57 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 nova_compute[163961]: 2025-10-09 10:10:58.173 2 DEBUG oslo_service.periodic_task [None req-0ea61bb1-f1b7-4374-b62a-b32a26311493 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct 09 10:10:58 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:58 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:58 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:10:58.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29366 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29378 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.19527 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29402 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29408 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.19545 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: pgmap v1142: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 1 op/s
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29134 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29438 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29450 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29158 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4089750137' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29468 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/1228437802' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.19608 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29194 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/1307369467' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29501 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/689574017' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/893082231' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29215 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29221 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='client.29540 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:58 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:58 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Oct 09 10:10:58 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3962151522' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:58 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:58 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:59 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:10:59 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.000000000s ======
Oct 09 10:10:59 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.102 - anonymous [09/Oct/2025:10:10:59.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Oct 09 10:10:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Oct 09 10:10:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4210533807' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:59 compute-2 podman[185835]: 2025-10-09 10:10:59.262194525 +0000 UTC m=+0.098263170 container health_status 2e931b4f92d1fb88926067e5f699b56c691500a6b2d5dcd38f540a60d5a2b460 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible)
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.19677 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3962151522' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/371431490' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.29281 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3666300404' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/1989934966' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: pgmap v1143: 337 pgs: 337 active+clean; 41 MiB data, 291 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/4210533807' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.19728 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/642372660' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 09 10:10:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/3312017814' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct 09 10:10:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 09 10:10:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/577281463' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "df"} v 0)
Oct 09 10:10:59 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2780871879' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:10:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:10:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:10:59 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:10:59 2025: (VI_0) received an invalid passwd!
Oct 09 10:11:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Oct 09 10:11:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2590067367' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Oct 09 10:11:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/764128313' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:11:00 compute-2 radosgw[12043]: ====== starting new request req=0x7fe9c0b415d0 =====
Oct 09 10:11:00 compute-2 radosgw[12043]: ====== req done req=0x7fe9c0b415d0 op status=0 http_status=200 latency=0.001000010s ======
Oct 09 10:11:00 compute-2 radosgw[12043]: beast: 0x7fe9c0b415d0: 192.168.122.100 - anonymous [09/Oct/2025:10:11:00.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000010s
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/3312017814' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/3882489346' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/327615863' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/577281463' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/2780871879' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.29627 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.102:0/2590067367' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.100:0/764128313' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: from='client.? 192.168.122.101:0/2134490318' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-mon[5983]: mon.compute-2@1(peon) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Oct 09 10:11:00 compute-2 ceph-mon[5983]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/2520245162' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Oct 09 10:11:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-nfs-cephfs-compute-2-dgxvnq[17691]: Thu Oct  9 10:11:00 2025: (VI_0) received an invalid passwd!
Oct 09 10:11:00 compute-2 ceph-286f8bf0-da72-5823-9a4e-ac4457d9e609-keepalived-rgw-default-compute-2-tcjodw[13803]: Thu Oct  9 10:11:00 2025: (VI_0) received an invalid passwd!
